

  • Process Modelling & Simulation: Calibrated system infrastructures with friendly-to-use, intuitive human-centric interfaces (Part 3 of 3) | Funis Consulting

    30 Apr 2025

    Human-centricity in innovation is not just a buzzword. Innovation should serve a purpose. That purpose for us here at Funis Consulting is to do good. Companies have an important role to play in society and here at Funis, we bring together science, technology and innovation for that same purpose. In Process Modelling & Simulation there is a great deal of science, mathematics, data and technical complexity involved, but the system can be designed with the end user in mind. That is what human-centricity should be about - innovation that works with, around and for people and societies.

    Once a model is built, you can "play" around with the variables to examine "what-if" scenarios, such as what would happen, or what would my model output be, if variables A, C and G were changed in a particular way. Of course, the more complex a system, the more variables there are that can be changed to assess different scenarios. You can run thousands of simulations, changing all inputs over ranges. As you change variables you get to know your system and its limitations, as well as its optimised state. You can also model different systems and connect them into one model, thus understanding the relationships between processes or systems. Sensitivity analysis can also be performed, which helps you understand which parameters affect the overall system the most, ensuring that the most important variables are kept at optimised levels at all times.

    So once a model is built, through various iterations or simulations, you can carry out process optimisation of the overall system or infrastructure. For instance, you can carry out multi-objective optimisation, add constraints to the system, or implement real-time control of your systems. This means you can run continuous system optimisation for real-time balancing, to mention just a few possibilities. Statistical process control in real time can give you warnings if trends are observed - this helps in forecasting problems before they arise.

    Modelling & Simulation can be extremely complex behind the scenes, but it does not need to feel difficult for the end user. With the correct user interface, as well as proper training and support, such tools can be made intuitive and approachable. Whilst there is a great deal of science, mathematics, data and technology running in the background, the system can be designed to feel friendly and simple on the surface. Depending on who is using the model - whether, for instance, your in-house data scientist or your production line machine operator - different users will need different insights, or sometimes the same insights presented in different ways, with more or less detail. The look-and-feel can therefore be adapted to the needs of its users by building different UIs, showing data in different ways, or even showing only the data which is relevant to the person viewing it.

    Although data, mathematics, science and technology involve a lot of complexity, here at Funis Consulting we believe in innovation that serves a purpose. Our aim is to deliver smart, tailored solutions that bring real value to businesses and society alike, always designed with the end user in mind.
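    As a small illustration of the "what-if" sweeps and sensitivity checks described above, the sketch below runs a hypothetical process model many times over ranges of its inputs and uses a simple correlation measure to see which input moves the output most. The model function, the parameter names and their ranges are invented for illustration only.

    ```python
    import numpy as np

    # Hypothetical process model: the output depends on three settings. The function,
    # the parameter names and the operating ranges below are invented for illustration only.
    def process_model(temperature, flow_rate, concentration):
        return 0.8 * temperature + 12.0 * np.sqrt(flow_rate) - 0.05 * concentration**2

    rng = np.random.default_rng(seed=1)
    n_runs = 10_000

    # "What-if" sweep: sample every input over its plausible operating range.
    temperature   = rng.uniform(60.0, 90.0, n_runs)   # e.g. degrees C
    flow_rate     = rng.uniform(1.0, 4.0, n_runs)     # e.g. L/min
    concentration = rng.uniform(5.0, 20.0, n_runs)    # e.g. %

    output = process_model(temperature, flow_rate, concentration)

    # Crude sensitivity measure: correlation between each input and the model output.
    for name, values in [("temperature", temperature),
                         ("flow_rate", flow_rate),
                         ("concentration", concentration)]:
        corr = np.corrcoef(values, output)[0, 1]
        print(f"{name:>13}: correlation with output = {corr:+.2f}")
    ```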

  • From Chemistry to Code: How Modelling, Simulation and Data Science are Transforming Formulation R&D | Funis Consulting

    10 Sept 2025

    Formulation R&D is evolving. Traditional trial-and-error approaches are no longer enough to keep pace with rising costs, tighter regulations, and growing sustainability targets. Computational modelling and data science make it possible to explore molecular interactions virtually, optimise formulations, and predict outcomes more efficiently. By combining chemistry with data, even smaller teams can innovate smarter, developing products that are more effective, sustainable, and aligned with modern expectations.

    At the heart of every consumer product, whether food, cosmetics, personal care, or household goods, lies chemistry. Formulation is the science of making ingredients work together: stabilising emulsions, controlling crystallisation, fine-tuning viscosity, balancing actives, and designing textures, aromas, or cleaning performance. For decades, new products have been developed through trial and error in the lab or pilot plant. Scientists experiment, tweak, and test until a stable, effective, or appealing formula emerges. But in today's environment, this traditional approach is often too slow, too costly, and too uncertain.

    The pressures are clear. Consumers expect products that deliver functionality, safety, and sensory appeal while also being healthier, gentler, or more sustainable. Competitors move quickly, and faster innovators often capture both shelf space and consumer loyalty. Meanwhile, volatile raw material costs, rising energy prices, and the expense of running iterative formulation trials drive the need for more efficient R&D. Regulations governing ingredients and safety are becoming increasingly complex, especially for chemicals and additives. At the same time, ambitious sustainability targets push companies to reduce environmental impact, optimise resources, and replace legacy ingredients without compromising performance.

    This is where modelling, simulation, and data science redefine the rules of formulation. Instead of relying purely on bench experiments, companies can now test, optimise, and predict product behaviour in silico. Consider the role of chemistry at the microscopic level: surfactants arranging at oil-water interfaces, polymers creating networks that affect viscosity, proteins folding and unfolding, fats crystallising into different structures, or volatile molecules driving aroma. These interactions determine whether a cream remains smooth, a sauce stays stable, a detergent dissolves effectively, or a shampoo delivers the right foam and feel. Traditionally, understanding these behaviours meant months of iterative testing. Now, computational models can simulate these same interactions virtually. Stability over shelf life can be predicted, ingredient compatibility mapped, and formulation robustness stress-tested under different conditions. Optimisation becomes faster, as algorithms can explore thousands of compositions long before a single sample is mixed. Even sensory and functional attributes such as flavour, fragrance, mouthfeel, spreadability, and cleaning efficacy can be linked directly to underlying chemistry using statistical and machine learning approaches.

    So how does this translate into real advantages for manufacturers such as FMCG and CPG companies? Most generate vast amounts of data from lab instruments, formulation databases, pilot plant trials, production lines, and consumer testing. Yet this information often remains fragmented and underutilised. Data science brings it together, combining experimental data with chemical knowledge to build predictive models. These models not only explain why certain formulations behave as they do but also forecast how new combinations will perform. This reduces dead ends, shortens development cycles, and increases confidence when scaling up. Crucially, advances in computing now make such tools accessible to small and mid-sized enterprises as well as multinationals. Working with specialists allows R&D teams to focus on creativity and innovation while computational methods handle the complexity of formulation space.

    Adopting these techniques requires a mindset shift. Modelling and data science do not replace chemistry and formulation expertise; they amplify it. Chemistry provides the governing rules, while computation offers the means to explore, optimise, and innovate at speed and scale. Together, they enable companies to design products that are more effective, more sustainable, and better aligned with consumer expectations, without the heavy cost of endless trial-and-error. In today's fast-moving CPG sector, formulation R&D is no longer confined to mixing and measuring in the lab. It is evolving into a powerful interplay between chemistry and computation, where smarter, faster, and more confident innovation becomes possible.
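    As a minimal sketch of the kind of predictive model described above, the example below fits a model on invented formulation data (ingredient fractions against a measured stability score) and then screens new candidate compositions in silico. The column meanings, the numbers and the choice of scikit-learn model are illustrative assumptions, not a prescribed workflow.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)

    # Invented dataset: each row is a past trial formulation, each column an ingredient
    # fraction, and "stability" is the measured response we want to predict.
    n_trials = 200
    X = rng.uniform(0.0, 1.0, size=(n_trials, 3))            # surfactant, polymer, oil fractions
    stability = 2.0 * X[:, 0] - 1.5 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.1, n_trials)

    X_train, X_test, y_train, y_test = train_test_split(X, stability, random_state=0)

    # Fit a model that links composition to the measured response.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out trials:", round(r2_score(y_test, model.predict(X_test)), 2))

    # Screen new candidate compositions in silico before anything is mixed in the lab.
    candidates = rng.uniform(0.0, 1.0, size=(5, 3))
    print("predicted stability of candidates:", model.predict(candidates).round(2))
    ```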

  • Data: Patterns and Clusters in Visualisation (Part 2 of 2) | Funis Consulting

    09 Apr 2025

    When working with large datasets, visualisation is key to gaining insights. This is also important when presenting to other business stakeholders. It makes all the difference when data is presented in a clear and meaningful way. Complex datasets do not need to be overwhelming. Today we explore the concept of clustering - how to identify patterns in unstructured and unlabelled data.

    Data collection is an important part of data analysis and visualisation. If you collect your data in the wrong way, it can lead to misleading interpretations. The data then needs to be sorted, with the same type of data placed together. In the LEGO image below, you see LEGO pieces of different colours grouped together. Not only do you need to sort the data, you also need to arrange it - for instance, converting it so that it is uniform and can be compared and used (formatting, unit conversion and so on). Data is then presented in a way which is understandable to analytical and non-analytical internal (and possibly external) stakeholders. Remember, in an organisation, some functions which might not be analytical in nature might still need to read and understand the data for strategy and/or decision-making. Once data is presented visually, it needs to be analysed and explained, and from there one can reach an outcome.

    [Image by Mónica Rosales Ascencio, LinkedIn]

    There are many visualisation methods one can use, from bar charts to scatter plots, but let's take a more scientific approach to visualisation, mostly used when you have large unstructured datasets to work with: clustering visualisation. Supervised clustering is when you group your data around data points which you have defined. These data points are defined by understanding and finding a pattern or common element in unstructured and unlabelled datasets. Let's take an easy example and imagine that we have the following data: Cat, Dog, Kitchen, Donkey, Sofa, Wardrobe, Door, Table, Horse, Bird, Chair. We immediately understand there are two clusters: furniture (let's call it Cluster A) and animals (Cluster B). All of the above data will be grouped around the clusters we established, either Cluster A for furniture or Cluster B for animals. If to this data I add a candleholder, it will sit somewhere outside the range of these clusters, because it is neither furniture nor an animal; however, it will be closer to Cluster A (furniture) than to Cluster B (animals). If we then add a glass bowl to the dataset, this too, like the candleholder, would sit outside the range. Having said that, the glass bowl might be slightly further away from Cluster A than the candleholder would be. This is because a domestic fish could live in a glass bowl, so there is a linkage there, albeit not a strong one.

    The sketch above shows Cluster A with cyan data points (furniture) around it and Cluster B with purple data points (animals) around it. The yellow dot in the middle is the candleholder and the orange dot is the fish bowl. Understanding a pattern is crucial when attributing data points to clusters. In machine learning, for instance, clustering is about grouping raw data. There are many applications for clustering across many industries, from fraud detection in banking and anomaly detection in healthcare, to market segmentation and many more.

    Let's take another simple example of how to make sense of unstructured data. Imagine we are asked to analyse a list of phone numbers and receive this data: 729698782172106674475298921152340587. What we know for sure is that phone numbers start either with 7 (in the case of a mobile number) or with 5 (in the case of a landline). Mobile numbers and landline numbers are of different lengths, but each type always contains the same number of digits. Furthermore, the area code (normally found in the first few digits of a phone number) has to be common to all the numbers, since this data comes from the same geographical area. Looking at the data, we identified that the only digit which consistently follows either a 5 or a 7 is 2. We also identified a fixed length for both the mobile numbers (10 digits) and the landline numbers (8 digits). With this knowledge we can split and structure the data as below - the first two in the list are mobile numbers and the second two are landline numbers: 7296987821, 7210667447, 52989211, 52340587.

    The larger and more complex the data, the more important it is to visualise it. If you have a lot of data to interpret, you have to visualise it to make sense of it. Visualisation is the key to letting your data help you and to making your data count.
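    As a small, concrete illustration of the rule-based structuring described above, the sketch below splits the same digit string using only the stated assumptions: numbers starting with 7 are mobiles of 10 digits, numbers starting with 5 are landlines of 8 digits.

    ```python
    raw = "729698782172106674475298921152340587"

    # Assumed rules from the example above: mobile numbers start with 7 and are 10 digits
    # long, landline numbers start with 5 and are 8 digits long.
    LENGTH_BY_PREFIX = {"7": 10, "5": 8}

    numbers = []
    i = 0
    while i < len(raw):
        length = LENGTH_BY_PREFIX[raw[i]]   # the leading digit tells us which type (and length) follows
        numbers.append(raw[i:i + length])
        i += length

    print(numbers)
    # ['7296987821', '7210667447', '52989211', '52340587']
    ```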

  • Of Force Fields and Simulations: Whether it’s All-Atom (AA), United Atom (UA) or Coarse-Grained (CG), a good Force Field is a cornerstone in Molecular Dynamics | Funis Consulting

    18 Jun 2025

    Not all Force Fields are created equal, and in Molecular Dynamics your results are only as good as the Force Field behind them: the set of rules governing how atoms move, bond and vibrate! Whether you go all-in with an All-Atom (AA) Force Field or speed things up with a Coarse-Grained (CG) approach, choosing the right Force Field is crucial for a delicate balance of accuracy, efficiency, detail and scale. There is no such thing as a universal Force Field which can be applied to everything. A well-chosen, well-tested Force Field, however? Now that is what turns a simulation into real insight. So, choosing a good Force Field is paramount for success.

    In Molecular Dynamics, the accuracy of the results of your simulation depends entirely on the quality of the model that you are using. This means that the equations and the parameters describing how the system behaves need to be of a high quality within the model that you are using. These equations and parameters are what make up the Force Field. A Force Field consists of two main parts: the mathematical functions (equations) which estimate potential energy (for instance, how atoms bond or repel each other) and the parameters used within those functions. These methods fall under molecular mechanics because they only take into account the positions of atomic nuclei, ignoring the more complex behaviour of electrons, and this simplification makes Force Field simulations much faster than quantum mechanical ones while still producing impressively accurate and precise results.

    There are a number of Force Fields out there, but none of them is a one-size-fits-all Force Field. Therefore, we have different Force Fields designed for different purposes, such as simulating small organic molecules, proteins, lipids or polymers, or different environments such as water, membranes or vacuum. Terms such as all-atom (AA), united atom (UA) and coarse-grained (CG) denote the level of detail that the Force Field works with. AA Force Fields simulate every single atom, giving you fine detail but at a higher computational cost. UA Force Fields, on the other hand, simplify things by grouping aliphatic hydrogens with their carbons, thereby reducing the total number of particles, while CG Force Fields take it a step further by grouping several atoms together (e.g., three carbon atoms and their hydrogens) into what's called a single "bead" or superatom. Going from AA to UA to CG, you lose detail but gain huge improvements in computational speed, making CG methods especially useful when dealing with large systems. Such systems could involve thousands of molecules, each with hundreds of atoms, making a detailed AA simulation impractical.

    There are plenty of Force Fields to choose from, and popular ones include OPLS-AA, OPLS-UA, AMBER, CHARMM, MARTINI and COGITO, just to name a few. Which one to go for very much depends on your system, your goal and how long you are willing to wait for results, given that no single Force Field works for everything. Some are more versatile than others, but in most cases you will need to test and validate your chosen Force Field, ideally by checking whether it can reproduce known experimental results before diving into your full simulations.

    In the end, a Force Field is a powerful yet simplified tool. Even when using basic models - for instance, describing bond stretching with Hooke's law - it can still provide a surprisingly accurate picture of the real system. One of the key strengths of a good Force Field is transferability. This means that the Force Field should perform well not only on the specific molecules it was built for but also on related or larger systems. This is what makes a good Force Field a valuable cornerstone of molecular simulation.
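    As a minimal illustration of the bond-stretching term mentioned above, the sketch below evaluates a harmonic (Hooke's law) bond potential, E(r) = k(r - r0)^2, over a range of bond lengths. The force constant and equilibrium length are placeholder values for illustration, not parameters taken from any of the named Force Fields.

    ```python
    import numpy as np

    # Harmonic bond-stretching term of a classical force field:
    #   E(r) = k * (r - r0)**2
    # (some force fields include a factor of 1/2; check the convention of the one you use).
    # The numbers below are illustrative placeholders, not values from a real force field.
    k = 300.0     # force constant, kcal/mol/Angstrom^2 (illustrative)
    r0 = 1.53     # equilibrium bond length, Angstrom (illustrative, roughly a C-C bond)

    def bond_energy(r):
        """Potential energy of a single bond stretched or compressed to length r."""
        return k * (r - r0) ** 2

    r = np.linspace(1.3, 1.8, 6)          # bond lengths to evaluate, Angstrom
    for length, energy in zip(r, bond_energy(r)):
        print(f"r = {length:.2f} A  ->  E = {energy:6.2f} kcal/mol")
    ```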

  • Making sense of Flow: How Computational Fluid Dynamics (CFD) can help bring fluid behaviour to life | Funis Consulting

    21 May 2025

    Have you ever wondered how fluids move in buildings, vehicles or water systems, or how these can be designed to be more efficient? That is where Computational Fluid Dynamics (CFD) comes in, as it enables us to simulate fluid flow virtually, on a computer. By letting something that does not yet exist take shape in a virtual space, CFD enables us to spot problems, test ideas and optimise designs before anything physical is built. Here at Funis Consulting, we use CFD to make the invisible visible - from airflow, to heat, to pressure - so that innovation can be done with confidence, in a safe environment, all the while designing smarter and saving energy.

    Computational Fluid Dynamics, commonly referred to by its acronym CFD, is a powerful way to understand how fluids (liquids and gases) behave. It's used in all sorts of industries, from designing aircraft and buildings, to predicting weather patterns, planning cities, improving water systems, or even understanding how pollutants spread in the air or sea. At its heart, CFD is about creating a virtual environment where we can explore how fluids behave before anything is built or tested in the real world. Instead of jumping straight into expensive physical experiments or prototypes, scientists and engineers can simulate different scenarios on a computer. This lets them spot potential problems, make improvements, and fine-tune designs safely and efficiently.

    It works using a set of equations that describe how fluids move and respond to things like pressure, temperature, and gravity. These equations might be complex under the hood, but what matters is the outcome: they allow us to visualise flow patterns that we could never see otherwise. You can zoom into the tiniest detail of a system and see where energy is being wasted, where pressure builds up, or where the design could be made more efficient. That kind of insight can make a big difference, whether it's in making a car more aerodynamic, improving the way a ventilation system moves air, or reducing energy waste in a heating system.

    Thanks to advances in computing power, artificial intelligence, and machine learning, CFD is becoming even more accessible and effective. We're seeing incredible developments, from digital twins to real-time simulations, i.e., virtual replicas of physical systems that update in real time. These innovations help us design smarter, more sustainable solutions and give us the tools to prepare for the challenges of the future. What makes CFD so exciting is not just the depth of understanding it offers, but the flexibility and speed it brings. Simulations can be run in parallel, saving time and cost, while providing detail and precision that would be difficult, or impossible, to capture through physical testing alone. And because you're working in a virtual space, there's far less risk involved. Imagine testing how a rocket performs under extreme conditions or how a pipe might deform under pressure, all without leaving the computer.

    Let's take a simple example. Imagine you're designing an oven and want to ensure that it heats food evenly. One of the biggest challenges in oven design is understanding how hot air circulates inside the chamber. This is where CFD becomes a valuable tool. To begin, you create a digital 3D model of the oven. This model includes all the important features: the heating element (which could be a coil, a fan, or both), the oven walls, and even the tray or rack that might hold food. CFD then divides the inside of the oven into many tiny 3D blocks called a mesh. These blocks help simulate how air and heat behave in very small regions of the oven, allowing for a detailed analysis of the entire space.

    Next, you define the operating conditions. You tell the simulation where the heat is coming from, what temperature the walls should be, whether a fan is blowing air around, and whether there is an object (like a loaf of bread or a cookie, or maybe some components you are curing) sitting on a tray that could block or change the flow of air. Once the setup is complete, you run the simulation. The software calculates how hot air moves through the oven, how it rises, circulates, and cools. It shows how the heat transfers from the heating element to the air and then from the air to the food. It also identifies areas where air moves slowly or forms swirls, which can lead to uneven cooking.

    The results are visual and intuitive. You might see a colour map of the oven interior, with red areas showing where it's hottest and blue areas where it's cooler. You could also view arrows that represent air movement, helping you understand whether the hot air is reaching all corners of the oven or if there are dead zones where it stagnates. By using CFD in this way, you can spot problems in your oven design early. You might find that moving the fan or reshaping part of the interior leads to better air circulation. Ultimately, CFD helps you design ovens that cook food more evenly, heat up faster, and use energy more efficiently.

    In short, CFD lets us explore the invisible, fix problems before they arise, and build with greater confidence. It's a behind-the-scenes hero in the world of science and engineering, quietly helping to shape a safer, cleaner, and more efficient future.
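    The sketch below is not a CFD solver, but it illustrates the mesh idea described above: a 2D cross-section of the oven is divided into small cells and a simple heat-conduction (diffusion) equation is stepped forward in time on that grid. All values are invented, and a real CFD model would also solve for the movement of the air itself.

    ```python
    import numpy as np

    # Toy illustration of the "mesh" idea: divide a 2D cross-section of the oven chamber
    # into small cells and step a simple heat-conduction (diffusion) equation forward in
    # time. This is conduction only - a real CFD model would also solve for the moving air.
    nx, ny = 50, 50                      # number of cells in each direction
    alpha = 1e-4                         # thermal diffusivity (illustrative value)
    dx = 0.01                            # cell size, m
    dt = 0.2 * dx**2 / alpha             # time step chosen for numerical stability

    T = np.full((ny, nx), 20.0)          # start the whole chamber at 20 degrees C
    T[0, :] = 220.0                      # bottom row acts as the heating element

    for _ in range(2000):
        Tn = T.copy()
        # Explicit finite differences: each interior cell moves towards its four neighbours.
        T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt / dx**2 * (
            Tn[2:, 1:-1] + Tn[:-2, 1:-1] + Tn[1:-1, 2:] + Tn[1:-1, :-2] - 4.0 * Tn[1:-1, 1:-1]
        )
        T[0, :] = 220.0                  # keep the heating element at a fixed temperature

    print(f"temperature near the centre of the chamber: {T[ny // 2, nx // 2]:.1f} C")
    ```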

  • Harnessing the Power of Optimisation | Funis Consulting

    12 Mar 2025

    We have all been there: seeing a process and thinking there must be a better way to do this, one that even achieves a better, more accurate output. Whether it is a software flow, a manual process or an entire system, optimisation helps businesses find and implement improvements that can have a huge impact.

    Certain processes can be far too complicated when they do not need to be. This means that time and resources are wasted, leading to sub-optimal productivity within a company. The more complicated processes are, the higher the risk of human errors and setbacks, holding companies back from moving projects and innovation forward and from focusing on what really matters. Every system has its own pace, but when inefficiencies start to negatively affect a company, it is a good idea to pause for a moment, take a closer look at the different components and tools in place, and see where optimisation can make a real change to your business. Process optimisation can truly help businesses make that transformation, enabling teams to focus their time and energy on what's important. Optimisation can bring a number of benefits to companies and can be used across all sectors, be it public policy, governmental planning, pharmaceutical, biotechnology, transportation, mobility services, manufacturing and operations, FMCG, supply chain and logistics, healthcare, medical applications and finance, just to name a few.

    To understand optimisation, one first has to understand predictive modelling. In predictive modelling, as long as we know the input x and the relationship between x and y (i.e. f(x)), we are able to predict the output y. You might be familiar with the example below from your school days, which illustrates the equation of a straight line, y = mx + c, where m is the gradient (or slope) and c is the intercept. In process optimisation, m and c could be your process settings. Here, by knowing x and f(x), you are able to predict the output y.

    [Graph showing correlation between x and y]

    Taking the example above, optimisation comes in when you need to find m and c, knowing your input (x) and what you want to get out (y). Therefore, starting from the desired output (y), a known variable, we need to understand the relationship between x and y, i.e. f(x), whose parameters are unknown. We do this by utilising the data that is known to us. Optimisation, therefore, is finding out which variables you need to deploy, and in what manner, in order to get to the desired result or output. It works by attempting various iterations or value changes in the unknowns (in this case m and c, our process settings), varying these until we reach what is called a zero loss (0 Loss) and hence achieve the desired output, y. In this way, we discover the parameters needed to get to the desired y.

    Optimisation can be single-objective or multi-objective, with the latter having more complexity, which might make obtaining a 0 Loss very difficult. In such cases, one finds what is called the global minimum, which is essentially the closest possible to a 0 Loss scenario. In optimisation, a specialised algorithm is used to run the simulations, according to a set of chosen rules and weights attributed to the different rules.

    Let's take, for instance, a multi-objective process optimisation in a manufacturing setting. Imagine a number of different ingredients which need to be combined, each bearing different pricing, processing times and various constraints. A specialised algorithm helps in determining the variables and how these are to be deployed in order to get to the desired product or output - the best possible product, manufactured within a certain time and cost and to a certain quality. With a random sampling technique, when working with such a large number of variables and permutations, the higher the number of samples or iteration runs, the closer you get to a 0 Loss and therefore the more accurate the output. This, however, leaves the probability of finding the global minimum up to chance. With a Bayesian optimisation technique we can reach the global minimum in a much more focused manner, taking far fewer iterations to do so, especially in a multi-variate scenario, which makes it a preferred method for optimisation.
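    As a minimal sketch of the idea above - recovering m and c from known inputs and desired outputs by driving a loss towards zero - the example below compares plain random sampling with a standard numerical optimiser from SciPy (a local optimiser here, standing in for the more focused search that Bayesian optimisation provides). The data and ranges are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(seed=42)

    # Known inputs x and desired outputs y. Here y is generated from a "true" line
    # y = 2.5x + 1.0 purely so we can check the answer; in practice y would be measured data.
    x = np.linspace(0, 10, 20)
    y = 2.5 * x + 1.0

    def loss(params):
        m, c = params
        return np.mean((m * x + c - y) ** 2)   # squared error between prediction and target

    # Random sampling: try many random (m, c) pairs and keep the best one found.
    samples = rng.uniform(-10, 10, size=(5000, 2))
    best = min(samples, key=loss)
    print("random sampling ->", best.round(2), "loss:", round(loss(best), 3))

    # A focused optimiser drives the loss towards zero in far fewer evaluations.
    result = minimize(loss, x0=[0.0, 0.0])
    print("optimiser       ->", result.x.round(2), "loss:", round(result.fun, 6))
    ```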

  • The Importance of Choosing the Right Visualisation for your Data and your Audience. | Funis Consulting

    24 Sept 2025

    Data visualisation isn't about creating pretty pictures; it's about making data meaningful. The right choice of visual can reveal patterns, trends, and insights, while the wrong one risks confusion or misinterpretation. By tailoring visualisations to both the dataset and the audience, and designing with inclusivity in mind, we turn numbers into clarity that drives better decisions.

    In times driven by data and analytics, especially when it comes to major decision-making, correct interpretation of the data is essential, and that interpretation is greatly influenced by how we present it. The best tool, no matter how good it is, is irrelevant if it cannot be used. Similarly, a dataset, no matter how good or detailed it is, loses its value if it cannot be understood by the people who need to use it. Therefore, solutions need to be tailored to the dataset in question, as well as to the audience or users who will make use of the solution in their day-to-day work. The right visualisation can reveal trends, habits, relationships and outliers which would otherwise remain hidden in data that is not visualised in the right manner. The wrong choice of visualisation can confuse, mislead, misdirect or alienate the very people that need to understand the data. So, think about it: not only does the wrong choice of visualisation make your dataset incomprehensible, it can actually invite misinterpretation, something that you would not want!

    Let's talk about the audience for a moment, because the audience in visualisation matters a lot. Datasets normally have multiple stories to tell, so the role of visualisation is to make those stories as clear as possible to the intended audience. That audience could range from analytics experts, who might prefer complex plots such as box plots, heatmaps or network graphs to capture complex patterns and nuances, to non-technical stakeholders such as managers, policymakers and consumers, who might benefit from simple, fast-to-read, more intuitive visuals such as bar charts or line graphs. Inclusivity is very important because a visualisation tool should not assume that every user has the same level of statistical or technical literacy. For instance, a red and blue heatmap is great for an analytics expert unless they are colourblind, in which case the heatmap could be rendered in greyscale so that it can still be read easily. Clear labelling, accessible colour schemes and interactive features can ensure that people from different backgrounds are able to draw meaning from the same data.

    The dataset type will most often dictate the most suitable visualisation approach:

    • Categorical data (product types, demographics, survey responses): bar charts and column charts, because these highlight proportions and comparisons between discrete categories.
    • Time series data (sales figures over months, stock prices, sensor readings): line charts and area charts, as these show trends, patterns and seasonality over time.
    • Geospatial data (customer locations, climate zones, logistics routes): maps, choropleth maps and bubble maps, as these add a spatial dimension, making it easy to spot regional variations or clusters.
    • Hierarchical data (company structures, product families): treemaps and sunburst charts, as these capture relationships and proportions across layers.
    • Relational data (social networks, process connections, supply chains): network graphs and Sankey diagrams, as they show interactions, dependencies and flows.
    • Distributions (customer ages, processing times): histograms, box plots and violin plots, to show variability, central tendencies and outliers.
    • Multivariate data (for instance, comparing product performance across multiple metrics): scatter plots, bubble charts and parallel coordinate plots, since they allow users to explore relationships between multiple variables at once.

    Accessibility should not be an afterthought. If your data tool is going to be accessed by stakeholders of different technical and/or analytical abilities, then it is important that this is kept in mind at all stages when designing the tool. The tool should be clear, and jargon is to be avoided where possible. When colour palettes are involved, ensure colour accessibility. Interactivity in the design, such as the ability to zoom in or filter, highlighting what matters, and consistency throughout the various layers and stages of the tool are also important.

    So, data visualisation is not about placing numbers neatly and prettily on a graph to decorate a PowerPoint presentation during a meeting, nor is it about impressing senior management with data overload. It does not work that way. Visualisation is an important choice to make, as the plot can make the difference between insight and misunderstanding. By matching the visualisation to your dataset and audience, as well as by designing with inclusivity in mind, we create tools that empower people and businesses to make better decisions.
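    As a small illustration of matching chart type to data type, the sketch below plots invented numbers two ways: a bar chart for categorical data and a line chart for a time series, using matplotlib and a colour-accessible greyscale palette.

    ```python
    import matplotlib.pyplot as plt

    # Invented numbers, purely to illustrate matching chart type to data type.
    categories = ["Product A", "Product B", "Product C"]
    units_sold = [120, 95, 143]                       # categorical data -> bar chart

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    revenue = [10.2, 11.1, 9.8, 12.4, 13.0, 12.7]     # time series -> line chart

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

    ax1.bar(categories, units_sold, color="grey")     # greyscale keeps it colourblind-friendly
    ax1.set_title("Categorical: units sold by product")
    ax1.set_ylabel("Units")

    ax2.plot(months, revenue, marker="o", color="black")
    ax2.set_title("Time series: monthly revenue")
    ax2.set_ylabel("Revenue (k)")

    fig.tight_layout()
    plt.show()
    ```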

  • Of Randomness, True Probability and Simulation | Funis Consulting

    08 Oct 2025

    Flipping a coin shows that randomness has variability: 10 tosses rarely give exactly 5 heads and 5 tails, but 100,000 tosses approach the expected 50:50 ratio. Each toss is independent, so short-term deviations don't "self-correct". Larger sample sizes reduce noise and reveal underlying probabilities. With more complex outcomes, like dice or natural phenomena, the true distribution only emerges after many trials. Simulation lets us explore uncertain systems, approximate distributions, and make reliable predictions by running many iterations with varying inputs.

    When you flip a coin, there's a 50-50 chance of getting heads or tails. If you only flip the coin ten times, it's actually pretty unlikely that you'll end up with exactly five heads and five tails. Do it 100,000 times, though, and you'll land very close to 50,000 of each. That's the law of large numbers in action: the more times you run the experiment, the closer the results get to what you'd expect. This happens because randomness involves variability. Taking the coin example, expecting exactly 5 heads and 5 tails from 10 flips is not realistic. Each flip is subject to random variation, and with such a small number of tosses the effect of this variation is relatively large! However, if we repeat the coin flip many times, say 100,000 times, the relative frequency of heads and tails tends to settle near the true probability of 50:50. Over the long run, with a sufficiently large sample size, random deviations tend to cancel out. This variation is also called "noise". With only 10 flips, the proportion of heads is a rough, noisy estimate of the true probability. With 100,000 flips, on the other hand, the noise shrinks and the proportion stabilises at 50%, reflecting the true probability of 0.5.

    It's also somewhat counterintuitive: after 10 flips, even if you have seen far more heads than tails, it's a myth to assume that the next flip is "due" to be tails. Each toss is independent, with a 50:50 chance of landing heads or tails. There is no short-term mechanism forcing the outcomes to balance out. Randomness doesn't self-correct in the short run. A run of tails does not make heads more likely in the next toss.

    The coin flip example is simple because we already know the underlying distribution: each flip has a 50% chance of being heads or tails. But what happens when the underlying distribution is unknown, or multiple factors influence the outcomes? A classic case is the normal (Gaussian) distribution, and the classic example is that of children's heights. If you take the height of two children, they will most likely be different, and completely unrepresentative of the average height of children their age. However, if you take the height of 1,000 children of the same age, the familiar bell curve starts to emerge, with most children being of average height and far fewer children at the shorter and taller ends of the height range. This all means that the more trials you conduct, the closer your observed outcomes get to the true distribution. Normal distributions appear both in data and in nature, because many small, independent effects tend to combine into a bell curve.

    Things get really interesting when we move beyond a simple coin - where there are only two possible outcomes - to situations with many possible outcomes or where the result is influenced by multiple independent factors. Take a die, for example. If I were to roll it 100,000 times, each face, 1 through 6, would appear roughly the same number of times. But if I only rolled it ten times, the distribution would likely be uneven, and you wouldn't see the actual (flat) distribution because the sample size is too small.

    This is where simulation becomes powerful. When outcomes are uncertain or influenced by many factors, simulation lets you run multiple iterations with varying inputs. By doing so, you can approximate the true distribution and make more reliable predictions about future outcomes. Essentially, it allows you to explore a wide range of possibilities in a safe, controlled space and understand what is likely to happen under different scenarios.
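    The convergence described above is easy to reproduce. The sketch below simulates coin tosses and dice rolls with NumPy and shows how the proportions settle towards the true probabilities as the number of trials grows; the seed and sample sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(seed=7)

    # Coin flips: the proportion of heads is noisy for small samples and settles near 0.5
    # as the number of tosses grows (law of large numbers).
    for n in (10, 100, 100_000):
        heads = rng.integers(0, 2, size=n).sum()
        print(f"{n:>7} tosses -> proportion of heads = {heads / n:.3f}")

    # Dice rolls: with only 10 rolls the counts per face are uneven; with 100,000 rolls
    # the flat (uniform) distribution emerges.
    for n in (10, 100_000):
        faces, counts = np.unique(rng.integers(1, 7, size=n), return_counts=True)
        print(f"{n:>7} rolls  -> counts per face:", dict(zip(faces.tolist(), counts.tolist())))
    ```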

  • Discrete Event Simulation Meets Process Modelling | Funis Consulting

    27 Aug 2025

    When a production line hits its limits, it is tempting to invest in more equipment, but what if the real solution is hidden in the process? By combining Discrete Event Simulation (DES) with process optimisation you can identify bottlenecks, reduce idle time, improve flow and resource use, and test scenarios without disrupting operations. In many cases, you will see throughput gains and cost savings without adding a single machine! Sometimes the smartest investment is simulation.

    Process modelling tells you how something works. It focuses on what happens, such as unit operations, material transformations, mass and energy balances and quality parameters. Discrete Event Simulation (DES), on the other hand, tells you when, how often and for how long it works (or doesn't). DES focuses on when things happen and looks at queue times, delays, capacity utilisation, operator shifts and schedule adherence. On their own, each method offers insight, but combined they create a more complete picture of dynamic systems in food and manufacturing operations. This is because you can simulate not only the steps in your process but also the timing and resource implications under different conditions. Examples include simulating a bottling line where temperature profiles (process model) interact with shift patterns and downtimes (DES), or modelling a bakery's dough fermentation (process kinetics) alongside batch scheduling and equipment cleaning cycles. It can also help evaluate how small changes in batch size affect overall equipment effectiveness, energy use and delivery times. This hybrid approach is increasingly useful for digital twins, capacity expansion planning and investment decisions, especially in food, where shelf life, throughput and variability make timing critical. Insight emerges not just from what happens but from when it happens and to what extent, and that is where process models and DES complement each other.

    Let's take the example of a biscuit production line where we want to improve throughput while reducing energy use and bottlenecks. The current situation is a production line with dough mixing, baking, cooling and packaging stages. The oven is a bottleneck, running at maximum capacity, so packaging machines are often idle waiting for product, and as a result energy use spikes due to "stop-start" behaviour. DES in this case simulates the line and models every step, including details such as batch sizes, machine speeds, delays and shifts. It shows where and when queues form, machines go idle or capacity is underused, and it tracks performance metrics like overall equipment effectiveness, throughput and idle time. Process optimisation, when applied, adjusts batch sizes, buffer sizes and timing to reduce idle time, tests different staff schedules or machine speeds, and identifies oven usage patterns that smooth out energy consumption. The result may be increased throughput without new equipment, reduced packaging downtime, lower energy costs from optimised baking cycles, and data-backed confidence in changes before implementing them on-site.
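    As a toy sketch of the biscuit-line example above, the snippet below uses the SimPy discrete-event simulation library to model batches passing through an oven and a packaging station, then compares two oven capacities. All processing times, batch counts and capacities are invented for illustration; a real study would use measured data and would also include the cooling stage and realistic schedules.

    ```python
    import simpy

    # Toy discrete-event sketch of the biscuit line described above. Times, batch counts
    # and capacities are invented for illustration; a real study would use measured data.
    BATCHES = 40
    MIX_TIME, BAKE_TIME, PACK_TIME = 5, 12, 6        # minutes per batch (illustrative)

    def batch(env, oven, packer, waits):
        yield env.timeout(MIX_TIME)                  # dough mixing (no shared resource here)
        with oven.request() as req:                  # the oven is the shared, limited resource
            yield req
            yield env.timeout(BAKE_TIME)             # baking
        ready = env.now
        with packer.request() as req:                # queue for the packaging machine
            yield req
            waits.append(env.now - ready)            # how long the batch waited for packaging
            yield env.timeout(PACK_TIME)             # packaging

    def run(oven_capacity):
        env = simpy.Environment()
        oven = simpy.Resource(env, capacity=oven_capacity)
        packer = simpy.Resource(env, capacity=1)
        waits = []
        for _ in range(BATCHES):
            env.process(batch(env, oven, packer, waits))
        env.run()                                    # run until all batches are finished
        return env.now, sum(waits) / len(waits)

    for capacity in (1, 2):
        makespan, avg_wait = run(capacity)
        print(f"oven capacity {capacity}: line finishes at t = {makespan:.0f} min, "
              f"average wait before packaging = {avg_wait:.1f} min")
    ```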

  • Smoothed Particle Hydrodynamics | Funis Consulting

    06 Aug 2025

    In R&D, systems are rarely neat. Irregular flows, soft solids and messy boundaries are often the norm, making traditional Computational Fluid Dynamics (CFD) a poor fit. Smoothed Particle Hydrodynamics (SPH) offers a flexible, mesh-free alternative which is ideal for modelling the complexity we intuitively understand but struggle to simulate.

    In R&D, the systems we want to model are rarely clean or convenient. You have irregular boundaries, shifting phases, soft solids and chaotic flows - and this is quite often the norm! While traditional Computational Fluid Dynamics (CFD) has its place, it's not always a comfortable fit, especially when the system doesn't want to behave like a neat little mesh. This is where Smoothed Particle Hydrodynamics (SPH) offers something genuinely useful.

    SPH was originally developed for astrophysics, but is now being applied across engineering, biophysics, and even food science. It's a mesh-free computational method that treats matter as a collection of discrete particles. These particles interact through smoothing kernels, allowing the method to capture the nuances of deformable materials and complex flows without the constraints of a predefined grid.

    Many of the problems that industrial R&D teams face involve free-surface flows, splashing, or breakup; multiphase systems like slurries, emulsions, or suspensions; soft, gel-like, or granular materials that don't behave "neatly"; and flow regimes that are non-Newtonian or highly localised. In other words, R&D teams increasingly face systems that are difficult to model using conventional CFD approaches. Hence one can explore SPH in contexts where flexibility and physical intuition matter more than rigid formulations or high-fidelity turbulence models - for instance in food science and technology, with pastes, emulsions and powder-liquid interactions. It can also be used in materials science, such as for soft solids, gels and composites; in bioprocessing, with slow flows, yield-stress fluids and phase interactions; and in environmental processes such as sedimentation, erosion and pollutant spread. These are just a few of the applications that SPH can be used for.

    Notwithstanding the above, SPH is not a silver bullet. For large-scale simulations, SPH can be computationally heavy. Furthermore, it takes experience to tune things like kernel size and particle density effectively. But when the goal is to gain insight into complex, deformable, and dynamic systems, it often outperforms more conventional options, especially when you want models that reflect the system's quirks rather than smoothing them away. A lot of R&D teams have a deep understanding of their processes - empirical knowledge, pattern recognition, hands-on experience - but don't always have tools that can express that complexity. SPH offers a bridge between what people know intuitively and what can be represented computationally. It's just versatile enough to model what really matters.
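    As a minimal sketch of the particle-and-kernel idea described above, the snippet below estimates the density at each particle by summing neighbour masses weighted by a smoothing kernel. A Gaussian kernel and brute-force pairwise distances are used for brevity; production SPH codes typically use compactly supported kernels (such as the cubic spline) and neighbour search, and the particle count and smoothing length here are arbitrary.

    ```python
    import numpy as np

    # Core SPH idea: each particle carries mass, and field quantities such as density are
    # estimated by summing over neighbours weighted by a smoothing kernel W(r, h).

    def gaussian_kernel_2d(r, h):
        """2D Gaussian kernel, normalised so that it integrates to 1 over the plane."""
        return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

    rng = np.random.default_rng(seed=3)
    n = 400
    positions = rng.uniform(0.0, 1.0, size=(n, 2))   # particles scattered in a unit square
    mass = np.full(n, 1.0 / n)                       # equal masses (illustrative)
    h = 0.05                                         # smoothing length (illustrative)

    # Pairwise distances between all particles (fine for a few hundred particles).
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    # SPH density estimate: rho_i = sum_j m_j * W(|r_i - r_j|, h)
    density = (mass[None, :] * gaussian_kernel_2d(dist, h)).sum(axis=1)

    # Total mass 1 spread over a unit square -> density should be close to 1 away from edges.
    print("mean estimated density:", float(density.mean().round(3)))
    ```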

  • Data: The Importance of Data for Strategy and Decision-Making (Part 1 of 2) | Funis Consulting

    02 Apr 2025

    Data in its raw form can be very powerful for you and your business if you use it well. Otherwise, it can be overwhelming to manage, as well as misleading if not properly prepared, transformed and interpreted. What is even more important is that you interpret the data correctly, otherwise it can cause more harm than good: data transformation and interpretation are therefore key to making correct, informed, strategic choices.

    Data, in its raw form, is useless unless you transform and interpret it. That is when you start reaping the benefits of the data that you hold. It helps organisations and professionals make the right decisions for their business or for their clients. A bunch of text, numbers or images brought together without any structure says absolutely nothing. It is simply data overload to no effect, where you risk being lost in data or, even worse, misinterpreting it. It is only upon the correct transformation of data into readable information that we can start truly "seeing" what the message behind the data is. This is the interpretation of data.

    When analysing large datasets, data is likely to come from different sources, which makes this exercise even more complex. For example, imagine we want to analyse the major news events taking place over the past ten years, utilising different sources: various social media platforms, news websites, forums and blogs. What you need to do is put the different data types into different databases. That way you have a comprehensive dataset from various sources which you can then link together. Structuring data is therefore essential when combining data from different sources. There are many tools out there that can help do this. Python is one of the preferred tools for data scientists, helping with structuring and cleansing the data. Cleaning of data can take many forms, from changing formatting to unit conversions to more complex algorithm-based techniques to filter out unwanted data, outliers and so on. Libraries in the Python ecosystem, such as NumPy and Pandas, greatly help with data wrangling.

    Machine learning, through the use of algorithms and equations, can then start making sense of all this data. Machine learning models can learn on their own from the constant inputs being fed to them, by humans or otherwise. A classic example is spam call identification. Of course, there isn't a person stopping these calls before they reach you; it is an automated machine learning system, a complex set of rules built and integrated into a machine learning algorithm. This constantly improves and learns on its own from the input made by us humans. So, if a type of call with specific features is identified as a spam call by a large number of people, the algorithm identifies the pattern amongst these different spam numbers that makes them classify as spam. That is how such machine learning systems learn: through continuous, autonomous improvement which is contingent on the reaction and input of us humans.

    One of the most basic and yet most important factors in data analysis and visualisation is to understand what the goal is... what is the question you are trying to answer? The goal must be specific. Only this way can the analysis and visualisation give you the answer to your question, because we are using the correct dataset to start with, cleansing the data as appropriate, and then using the relevant algorithms to answer the target question. You don't want to get lost in too much data when it comes to strategic and important decision-making. Decision-making should be based on reliable information. That reliable information must be presented in a way that stakeholders understand: whether they are analytical or not, the data must speak to them. That is why visualisation is much more than making the data look nice and colourful. It is about presenting the data in such a way as to answer the goal or question that you have, without ambiguity or uncertainty, as well as making it easy for other stakeholders to understand the data.
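    As a small sketch of the cleaning steps mentioned above (formatting, unit conversion, filtering out unwanted data and outliers), the snippet below tidies an invented table with Pandas. The columns, units and thresholds are made up for illustration.

    ```python
    import numpy as np
    import pandas as pd

    # Invented raw data combined from "different sources": mixed units, a missing reading
    # and an obvious outlier - the kinds of cleaning steps described above.
    raw = pd.DataFrame({
        "sensor":      ["A", "A", "B", "B", "B"],
        "temperature": [21.5, np.nan, 70.2, 69.8, 999.0],   # sensor B reports Fahrenheit; 999 is junk
        "unit":        ["C", "C", "F", "F", "F"],
    })

    df = raw.copy()

    # 1. Handle gaps: here we simply drop rows with no reading at all.
    df = df.dropna(subset=["temperature"])

    # 2. Unit conversion: bring everything to Celsius so the values can be compared.
    is_f = df["unit"] == "F"
    df.loc[is_f, "temperature"] = (df.loc[is_f, "temperature"] - 32) * 5 / 9
    df["unit"] = "C"

    # 3. Filter out physically implausible readings (a simple rule-based outlier check).
    df = df[df["temperature"].between(-50, 100)]

    print(df)
    ```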

  • What bends and what breaks under pressure? And which of the two will happen in a specific circumstance? Finite Element Analysis (or “FEA”) knows best. | Funis Consulting

    29 May 2025

    Whether a structure will hold or crack under various stressors, Finite Element Analysis (FEA) helps find the answer. When designing buildings and bridges, the integrity of the structure is everything, and that is what we aim to find out by using FEA. FEA breaks complex structures into smaller elements to simulate how these behave under stressors such as load, wind, vibration, earthquakes and so on. It helps to predict weak points and optimise structures or materials. This improves the safety of a structure before it is built.

    When different forces act on a structure, the outcome might not be obvious. The outcome can't simply be left to guesswork or high likelihood, because failure cannot be an option. Pressure applied to a structure can have a ripple effect across the entire system, and through Finite Element Analysis (FEA) we can determine where stress accumulates and, in turn, whether the structure will flex or bend, fail or break, or hold strong and unaltered - and you don't want your structure to fail or break.

    Having the structure flex or bend, or keeping the structure unaltered, can both be good outcomes; it depends on the context you use them in. It is interesting to note that the most resilient structures are actually the ones that bend. This flexibility allows them to absorb energy rather than resist it, and it reduces the risk of failure. If you think of a willow tree swaying in the wind, you see that it is adapting to, rather than resisting, the wind - a case where the structure is flexing, or changing, in order to adapt and not snap. The same concept applies to car bumpers, which absorb impact and change shape in a collision; to the cables of a suspension bridge under load; or to an airplane wing vibrating as it works with the conditions and adapts its form. It is the ability to give a little under pressure that allows them to hold a lot.

    Having said that, flexibility is not always the answer. There are some contexts where rigidity is key, and here the structures must maintain their exact shape at all times, no matter the stressors imposed. Some examples are support beams in a building, mounting brackets for heavy equipment, or surgical tools. Even minimal movement could lead to dangerous failure, so not even the slightest flexibility is allowed here. Both approaches increase resilience under stress, but one can't thrive in the other's environment. You can't use the material of mounting brackets for the wings of airplanes, and you can't use the material of car bumpers for support beams. This is the beauty of FEA.

    With Finite Element Analysis, we can simulate how a structure will behave under specific physical conditions before anything is physically built. It works by dividing the design of the structure into small, manageable elements, and this modelling technique helps us predict how a material, or a part of it, will respond to stressors, whether strain, vibration, heat, pressure and so on.

    Let's take earthquake-prone areas as an example. In these settings, buildings and infrastructure must be strong enough to withstand dynamic forces while flexible enough to avoid cracking under sudden shifts. If the material is too rigid, it can snap and break. If the material is too soft, it may collapse. There is a delicate balance which needs to be respected, and that is the balance FEA can help us find. Using modelling techniques such as FEA supports smarter, safer and more sustainable designs. Whether the need is to stay perfectly rigid or to flex under pressure, the methodology enables an understanding of materials and structures long before they exist in the real world, giving more confidence in the safety of your structures.
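    As a minimal sketch of the "divide into small elements" idea described above, the snippet below solves a one-dimensional finite-element model of a bar fixed at one end and pulled at the other: element stiffness matrices are assembled into a global matrix and the nodal displacements are solved for. The material, geometry and load values are invented, and real FEA involves 2D/3D elements and far richer physics.

    ```python
    import numpy as np

    # 1D finite-element sketch: an axially loaded bar split into small elements.
    # Values are invented for illustration (roughly a steel bar pulled at its free end).
    E = 200e9        # Young's modulus, Pa
    A = 1e-4         # cross-sectional area, m^2
    L = 2.0          # bar length, m
    n_elem = 10      # number of finite elements
    P = 10e3         # axial force applied at the free end, N

    n_nodes = n_elem + 1
    le = L / n_elem
    k_e = E * A / le * np.array([[1.0, -1.0],
                                 [-1.0, 1.0]])   # stiffness of one 2-node bar element

    # Assemble the global stiffness matrix from the element contributions.
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += k_e

    F = np.zeros(n_nodes)
    F[-1] = P                                    # load on the last (free-end) node

    # Boundary condition: node 0 is fixed, so solve only for the free nodes.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

    print(f"tip displacement: {u[-1] * 1e3:.3f} mm (analytical P*L/(E*A) = 1.000 mm)")
    print(f"axial stress in first element: {E * (u[1] - u[0]) / le / 1e6:.1f} MPa (P/A = 100.0 MPa)")
    ```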
