

  • Process Modelling & Simulation: Calibrated system infrastructures with friendly-to-use, intuitive human-centric interfaces (Part 3 of 3) | Funis Consulting

    30 Apr 2025

    Human-centricity in innovation is not just a buzzword. Innovation should serve a purpose. That purpose for us here at Funis Consulting is to do good. Companies have an important role to play in society, and at Funis we bring together science, technology and innovation for that same purpose. In Process Modelling & Simulation there is a great deal of science, mathematics, data and technical complexity involved, but the system can still be designed with the end user in mind. That is what human-centricity should be about: innovation that works with, around and for people and societies.

    Once a model is built, you can "play" around with the variables to examine "what-if" scenarios: what would happen, or what would my model output be, if variables A, C and G were changed in a particular way? Of course, the more complex a system, the more variables there are that can be changed to assess various scenarios. You can run thousands of simulations, changing all inputs over ranges. As you change variables, you get to know your system, its limitations and its optimised state. You can also model different systems and connect them into one model, thus understanding the relationships between processes or systems. Sensitivity analysis can also be performed, which helps you understand which parameters affect the overall system the most, ensuring that the most important variables are kept at optimised levels at all times.

    So once a model is built, through various iterations or simulations, you can carry out process optimisation of the overall system or infrastructure. For instance, you can carry out multi-objective optimisation, add constraints to the system, and carry out real-time control of your systems. This means you can run continuous system optimisation for real-time balancing of the system, just to mention a few possibilities. Statistical process control in real time can give you warnings when trends are observed, which helps in forecasting problems before they arise.

    Modelling & Simulation can be extremely complex behind the scenes, but it does not need to feel difficult to the end user. With the correct user interface, and with proper training and support, such tools can be made intuitive and approachable. Whilst there is a great deal of science, mathematics, data and technology running in the background, the system can be designed to feel friendly and simple on the surface. Depending on who is using the model, whether your in-house data scientist or your production line machine operator, different users will need different insights, or sometimes the same insights presented in different ways, with more or less detail. The look-and-feel can therefore be adapted to the needs of its users by building different UIs, showing data in different ways, or even showing only the data which is relevant to the person viewing it.

    Although data, mathematics, science and technology involve a lot of complexity, here at Funis Consulting we believe in innovation that serves a purpose. Our aim is to deliver smart, tailored solutions that bring real value to businesses and society alike, always designed with the end user in mind.
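    As a rough illustration of the "what-if" sweeps and sensitivity ideas described above, here is a minimal sketch. It assumes a hypothetical, already-calibrated process model throughput(temp, speed, feed_rate) (a made-up stand-in, not a Funis model) and simply varies one input at a time over ±20% to see which setting moves the output most; a real study would use the calibrated model itself and a formal method such as variance-based sensitivity analysis.

    ```python
    import numpy as np

    def throughput(temp, speed, feed_rate):
        """Hypothetical process model: a stand-in for a calibrated simulation."""
        return 100 - 0.05 * (temp - 180) ** 2 + 8 * np.log(speed) + 2.5 * feed_rate

    # Baseline operating point (assumed values, for illustration only)
    baseline = {"temp": 185.0, "speed": 60.0, "feed_rate": 12.0}

    # "What-if" sweep: vary one input at a time over +/-20% and record the output range
    for name, value in baseline.items():
        sweep = np.linspace(0.8 * value, 1.2 * value, 50)
        outputs = [throughput(**{**baseline, name: v}) for v in sweep]
        print(f"{name:10s} output range: {min(outputs):7.2f} .. {max(outputs):7.2f} "
              f"(spread {max(outputs) - min(outputs):.2f})")
    ```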

  • From Chemistry to Code: How Modelling, Simulation and Data Science are Transforming Formulation R&D | Funis Consulting

    10 Sept 2025

    Formulation R&D is evolving. Traditional trial-and-error approaches are no longer enough to keep pace with rising costs, tighter regulations, and growing sustainability targets. Computational modelling and data science make it possible to explore molecular interactions virtually, optimise formulations, and predict outcomes more efficiently. By combining chemistry with data, even smaller teams can innovate smarter, developing products that are more effective, sustainable, and aligned with modern expectations.

    At the heart of every consumer product, whether food, cosmetics, personal care, or household goods, lies chemistry. Formulation is the science of making ingredients work together: stabilising emulsions, controlling crystallisation, fine-tuning viscosity, balancing actives, and designing textures, aromas, or cleaning performance. For decades, new products have been developed through trial and error in the lab or pilot plant. Scientists experiment, tweak, and test until a stable, effective, or appealing formula emerges. But in today's environment, this traditional approach is often too slow, too costly, and too uncertain.

    The pressures are clear. Consumers expect products that deliver functionality, safety, and sensory appeal while also being healthier, gentler, or more sustainable. Competitors move quickly, and faster innovators often capture both shelf space and consumer loyalty. Meanwhile, volatile raw material costs, rising energy prices, and the expense of running iterative formulation trials drive the need for more efficient R&D. Regulations governing ingredients and safety are becoming increasingly complex, especially for chemicals and additives. At the same time, ambitious sustainability targets push companies to reduce environmental impact, optimise resources, and replace legacy ingredients without compromising performance.

    This is where modelling, simulation, and data science redefine the rules of formulation. Instead of relying purely on bench experiments, companies can now test, optimise, and predict product behaviour in silico. Consider the role of chemistry at the microscopic level: surfactants arranging at oil-water interfaces, polymers creating networks that affect viscosity, proteins folding and unfolding, fats crystallising into different structures, or volatile molecules driving aroma. These interactions determine whether a cream remains smooth, a sauce stays stable, a detergent dissolves effectively, or a shampoo delivers the right foam and feel. Traditionally, understanding these behaviours meant months of iterative testing. Now, computational models can simulate these same interactions virtually. Stability over shelf life can be predicted, ingredient compatibility mapped, and formulation robustness stress-tested under different conditions. Optimisation becomes faster, as algorithms can explore thousands of compositions long before a single sample is mixed. Even sensory and functional attributes such as flavour, fragrance, mouthfeel, spreadability, and cleaning efficacy can be linked directly to underlying chemistry using statistical and machine learning approaches.

    So how does this translate into real advantages for manufacturers such as FMCG and CPG companies? Most generate vast amounts of data from lab instruments, formulation databases, pilot plant trials, production lines, and consumer testing. Yet this information often remains fragmented and underutilised. Data science brings it together, combining experimental data with chemical knowledge to build predictive models. These models not only explain why certain formulations behave as they do but also forecast how new combinations will perform. This reduces dead ends, shortens development cycles, and increases confidence when scaling up. Crucially, advances in computing now make such tools accessible to small and mid-sized enterprises as well as multinationals. Working with specialists allows R&D teams to focus on creativity and innovation while computational methods handle the complexity of formulation space.

    Adopting these techniques requires a mindset shift. Modelling and data science do not replace chemistry and formulation expertise; they amplify it. Chemistry provides the governing rules, while computation offers the means to explore, optimise, and innovate at speed and scale. Together, they enable companies to design products that are more effective, more sustainable, and better aligned with consumer expectations, without the heavy cost of endless trial and error. In today's fast-moving CPG sector, formulation R&D is no longer confined to mixing and measuring in the lab. It is evolving into a powerful interplay between chemistry and computation, where smarter, faster, and more confident innovation becomes possible.
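    To make the idea of linking composition to performance concrete, here is a minimal, hedged sketch: it invents a synthetic dataset of surfactant, oil and polymer fractions with a made-up viscosity response (none of it real formulation data) and fits a scikit-learn random-forest model so that a new, untested composition can be screened before anything is mixed.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical formulation dataset: fractions of surfactant, oil and polymer,
    # and a "measured" viscosity (synthetic numbers, purely for illustration).
    X = rng.uniform(0.0, 1.0, size=(200, 3))
    viscosity = 50 + 120 * X[:, 2] ** 2 + 30 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 200)

    X_train, X_test, y_train, y_test = train_test_split(X, viscosity, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    print("R^2 on held-out formulations:", round(model.score(X_test, y_test), 3))
    # Screen a new, untested composition before mixing anything in the lab
    print("Predicted viscosity:", model.predict([[0.2, 0.3, 0.5]])[0])
    ```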

  • Data: Patterns and Clusters in Visualisation (Part 2 of 2) | Funis Consulting

    09 Apr 2025

    When working with large datasets, visualisation is key to gaining insights. This is also important when presenting to other business stakeholders: it makes all the difference when data is presented in a clear and meaningful way. Complex datasets do not need to be overwhelming. Today we explore the concept of clustering: how to identify patterns in unstructured and unlabelled data.

    Data collection is an important part of data analysis and visualisation. If you collect your data in the wrong way, it can lead to a misleading interpretation. The data then needs to be sorted, with the same type of data placed together. In the LEGO image below, you see LEGO pieces of different colours grouped together. Not only do you need to sort the data, you also need to arrange it, i.e. convert it so that it is uniform and can be compared and used (formatting, unit conversion, etc.). Data is then presented in a way which is understandable to analytical and non-analytical internal (and possibly external) stakeholders. Remember, in an organisation, some functions which might not be analytical in nature might still need to be able to read and understand the data for strategy and/or decision-making. Once data is presented visually, it needs to be analysed and explained, and hence one can reach an outcome.

    Image by Mónica Rosales Ascencio from LinkedIn

    There are many visualisation methods one can use, from bar charts to scatter plots, but let's take a more scientific approach to visualisation, mostly used when you have large unstructured datasets to work with: clustering visualisation. Supervised clustering is when you group your data according to datapoints which you have defined. These datapoints are defined by understanding and finding a pattern or common element in unstructured and unlabelled datasets. So let's take an easy example and imagine that we have the following data: Cat, Dog, Kitchen, Donkey, Sofa, Wardrobe, Door, Table, Horse, Bird, Chair. We immediately understand there are two clusters: furniture (let's call it Cluster A) and animals (Cluster B). So all of the above data will be grouped around the clusters we established, either Cluster A for furniture or Cluster B for animals. If to this data I add a candleholder, then this item will fall somewhere outside the range of these clusters, because it is neither furniture nor an animal; however, it will be closer to Cluster A (furniture) than to Cluster B (animals). If we then add a glass bowl to the dataset, this too, like the candleholder, would be outside the range. Having said that, the glass bowl might sit slightly further away from Cluster A than the candleholder would. This is because a domestic fish could live in a glass bowl, so there is a linkage, albeit not a strong one, there.

    The sketch above is a simple one, showing Cluster A with cyan datapoints (furniture) around it and Cluster B with purple datapoints (animals) around it. The yellow dot in the middle is the candleholder and the orange dot is the fish bowl. Understanding a pattern is crucial when attributing data points to clusters. In machine learning, for instance, clustering is about grouping raw data. There are many applications for clustering across many industries, from fraud detection in banking and anomaly detection in healthcare, to market segmentation and many more.
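    The furniture/animal example above is conceptual; in practice clustering algorithms work on numeric features. A minimal sketch with scikit-learn's KMeans on made-up 2D points (purely illustrative, not data from the article) shows the same idea: the algorithm finds the two groups on its own, and an in-between point, like the candleholder, is simply assigned to whichever cluster centre it sits closest to.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(42)

    # Two made-up groups of 2D points standing in for "Cluster A" and "Cluster B"
    cluster_a = rng.normal(loc=[0, 0], scale=0.5, size=(30, 2))
    cluster_b = rng.normal(loc=[5, 5], scale=0.5, size=(30, 2))
    points = np.vstack([cluster_a, cluster_b])

    # Let k-means find the two groups without being told which point is which
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print("Cluster centres:\n", kmeans.cluster_centers_)

    # An in-between point (the "candleholder") is assigned to the nearest centre
    print("In-between point assigned to cluster:", kmeans.predict([[2.0, 1.0]])[0])
    ```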
    Let's take another simple example of how to make sense of unstructured data. Imagine we are asked to analyse a list of phone numbers and receive this data: 729698782172106674475298921152340587. What we know for sure is that phone numbers start either with 7 (in the case of a mobile number) or with 5 (in the case of a landline). Mobile numbers and landline numbers are of different lengths, but each type always contains the same number of digits. Furthermore, the area code (normally found in the first few digits of a phone number) has to be a common number, since this data comes from the same geographical area. Looking at the data, we identify that the only common digit which follows either a 5 or a 7 is the number 2. We also identify an equal length for the mobile numbers (10 digits) and for the landline numbers (8 digits). With this knowledge we can split and structure the dataset as below; the first two are mobile numbers and the second two are landline numbers.

    7296987821
    7210667447
    52989211
    52340587

    The larger and more complex the data, the more important it is to visualise it. If you have lots of data to show for interpretation, you simply have to visualise it to make sense of it. Visualisation is simply the key to letting your data help you and to making your data count.
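    The phone-number example can be written down directly in code. The short sketch below applies exactly the rules stated above (mobile numbers start with 7 and are 10 digits long, landlines start with 5 and are 8 digits long) to split the raw string into structured records.

    ```python
    raw = "729698782172106674475298921152340587"

    numbers, i = [], 0
    while i < len(raw):
        if raw[i] == "7":          # mobile numbers: start with 7, 10 digits long
            length = 10
        elif raw[i] == "5":        # landline numbers: start with 5, 8 digits long
            length = 8
        else:
            raise ValueError(f"Unexpected leading digit at position {i}")
        numbers.append(raw[i:i + length])
        i += length

    print(numbers)
    # ['7296987821', '7210667447', '52989211', '52340587']
    ```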

  • Of Force Fields and Simulations: Whether it’s All-Atom (AA), United Atom (UA) or Coarse-Grained (CG), a good Force Field is a cornerstone in Molecular Dynamics | Funis Consulting

    18 Jun 2025

    Not all Force Fields are created equal, and in Molecular Dynamics your results are only as good as the Force Field behind them: the set of rules for how atoms move, bond and vibrate. Whether you go all-in with an All-Atom (AA) Force Field or speed things up with a Coarse-Grained (CG) approach, choosing the right Force Field is crucial to striking a delicate balance between accuracy, efficiency, detail and scale. There is no such thing as a universal Force Field which can be applied to everything. However, a well-chosen, well-tested Force Field? Now that is what turns a simulation into real insight. So choosing a good Force Field is paramount for success.

    In Molecular Dynamics, the accuracy of your simulation results depends entirely on the quality of the model you are using. This means that the equations and parameters describing how the system behaves need to be of high quality within the model. These equations and parameters are what make up the Force Field. A Force Field consists of two main parts: the mathematical functions (equations) which estimate potential energy (for example, how atoms bond or repel each other) and the parameters used within those functions. These methods fall under molecular mechanics because they only take into account the positions of atomic nuclei, ignoring the more complex behaviour of electrons. This simplification makes Force Field simulations much faster than quantum mechanical ones, while still producing impressively accurate and precise results.

    There are a number of Force Fields out there, but none of them is a one-size-fits-all Force Field. Therefore, we have different Force Fields designed for different purposes, such as simulating small organic molecules, proteins, lipids or polymers, and for different environments such as water, membranes or vacuum. Terms such as all-atom (AA), united-atom (UA) and coarse-grained (CG) denote the level of detail that the Force Field works with. AA Force Fields simulate every single atom, giving you fine detail but at a higher computational cost. UA Force Fields, on the other hand, simplify things by grouping aliphatic hydrogens with their carbons, reducing the total number of particles, while CG Force Fields take it a step further by grouping several atoms together (e.g., three carbon atoms and their hydrogens) into what's called a single "bead" or superatom. Going from AA to UA to CG, you lose detail but gain huge improvements in computational speed, making CG methods especially useful when dealing with large systems. Such systems could involve simulating the behaviour of thousands of molecules, each with hundreds of atoms, making a detailed AA simulation impractical.

    There are plenty of Force Fields to choose from, and popular ones include OPLS-AA, OPLS-UA, AMBER, CHARMM, MARTINI and COGITO, just to name a few. Which one to go for very much depends on your system, your goal and how long you are willing to wait for results. Given that no one Force Field works for everything, some are more versatile than others, but in most cases you will need to test and validate your chosen Force Field, ideally by checking whether it can reproduce known experimental results before diving into your full simulations.

    In the end, a Force Field is a powerful yet simplified tool. Even when using basic models, such as describing bond stretching with Hooke's law, it can still provide a surprisingly accurate picture of the real system. One of the key strengths of a good Force Field is transferability. This means that the Force Field should perform well not only on the specific molecules it was built for but also on related or larger systems. This is what makes a good Force Field a valuable cornerstone of molecular simulation.
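    To make the bond-stretching term mentioned above concrete, here is a minimal sketch of a harmonic (Hooke's law) potential with illustrative, made-up parameters; real Force Fields sum many such terms (bonds, angles, torsions and non-bonded interactions), and the exact functional form and constants depend on the Force Field chosen.

    ```python
    import numpy as np

    def bond_energy(r, r0, k):
        """Harmonic (Hooke's law) bond-stretching term: E = k * (r - r0)**2.
        Note: some Force Fields include a factor of 1/2; conventions differ."""
        return k * (r - r0) ** 2

    # Illustrative parameters for a generic C-C single bond (made-up numbers):
    r0 = 0.153   # equilibrium bond length, nm
    k = 1.25e5   # force constant, kJ mol^-1 nm^-2

    for r in np.linspace(0.14, 0.17, 7):
        print(f"r = {r:.3f} nm  ->  E = {bond_energy(r, r0, k):8.2f} kJ/mol")
    ```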

  • Making sense of Flow: How Computational Fluid Dynamics (CFD) can help bring fluid behaviour to life | Funis Consulting

    21 May 2025

    Have you ever wondered how fluids move in buildings, vehicles or water systems, or how these can be designed to be more efficient? That is where Computational Fluid Dynamics (CFD) comes in, as it enables us to simulate fluid flow virtually, on a computer. By letting something which does not yet exist take shape in a virtual space, it enables us to spot problems, test ideas and optimise designs before anything physical is built. Here at Funis Consulting, we use CFD to make the invisible visible. From airflow, to heat, to pressure: we create this for you so that innovation can be done with confidence, in a safe environment, all the while designing smarter and saving energy.

    Computational Fluid Dynamics, commonly referred to by its acronym CFD, is a powerful way to understand how fluids (liquids and gases) behave. It's used in all sorts of industries, from designing aircraft and buildings, to predicting weather patterns, planning cities, improving water systems, or even understanding how pollutants spread in the air or sea. At its heart, CFD is about creating a virtual environment where we can explore how fluids behave before anything is built or tested in the real world. Instead of jumping straight into expensive physical experiments or prototypes, scientists and engineers can simulate different scenarios on a computer. This lets them spot potential problems, make improvements, and fine-tune designs safely and efficiently.

    It works using a set of equations that describe how fluids move and respond to things like pressure, temperature, and gravity. These equations might be complex under the hood, but what matters is the outcome: they allow us to visualise flow patterns that we could never see otherwise. You can zoom into the tiniest detail of a system and see where energy is being wasted, where pressure builds up, or where the design could be made more efficient. That kind of insight can make a big difference, whether it's in making a car more aerodynamic, improving the way a ventilation system moves air, or reducing energy waste in a heating system.

    Thanks to advances in computing power, artificial intelligence, and machine learning, CFD is becoming even more accessible and effective. We're seeing incredible developments, from digital twins to real-time simulations, i.e., virtual replicas of physical systems that update in real time. These innovations help us design smarter, more sustainable solutions and give us the tools to prepare for the challenges of the future. What makes CFD so exciting is not just the depth of understanding it offers, but the flexibility and speed it brings. Simulations can be run in parallel, saving time and cost, while providing detail and precision that would be difficult, or impossible, to capture through physical testing alone. And because you're working in a virtual space, there's far less risk involved. Imagine testing how a rocket performs under extreme conditions or how a pipe might deform under pressure, all without leaving the computer.

    Let's take a simple example. Imagine you're designing an oven and want to ensure that it heats food evenly. One of the biggest challenges in oven design is understanding how hot air circulates inside the chamber. This is where CFD becomes a valuable tool. To begin, you create a digital 3D model of the oven. This model includes all the important features: the heating element (which could be a coil, a fan, or both), the oven walls, and even the tray or rack that might hold food. CFD then divides the inside of the oven into many tiny 3D blocks called a mesh. These blocks help simulate how air and heat behave in very small regions of the oven, allowing for a detailed analysis of the entire space.

    Next, you define the operating conditions. You tell the simulation where the heat is coming from, what temperature the walls should be, whether a fan is blowing air around, and whether there is an object (like a loaf of bread or a cookie, or maybe some components you are curing) sitting on a tray that could block or change the flow of air. Once the setup is complete, you run the simulation. The software calculates how hot air moves through the oven, how it rises, circulates, and cools. It shows how heat transfers from the heating element to the air and then from the air to the food. It also identifies areas where air moves slowly or forms swirls, which can lead to uneven cooking.

    The results are visual and intuitive. You might see a colour map of the oven interior, with red areas showing where it's hottest and blue areas where it's cooler. You could also view arrows that represent air movement, helping you understand whether the hot air is reaching all corners of the oven or if there are dead zones where it stagnates. By using CFD in this way, you can spot problems in your oven design early. You might find that moving the fan or reshaping part of the interior leads to better air circulation. Ultimately, CFD helps you design ovens that cook food more evenly, heat up faster, and use energy more efficiently.

    In short, CFD lets us explore the invisible, fix problems before they arise, and build with greater confidence. It's a behind-the-scenes hero in the world of science and engineering, quietly helping to shape a safer, cleaner, and more efficient future.
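    The oven example, stripped right down, can be hinted at with a toy finite-difference grid. The sketch below (made-up grid size and temperatures) only shows heat spreading between neighbouring cells of a coarse 2D mesh; a real CFD model would also solve the flow equations on a much finer 3D mesh.

    ```python
    import numpy as np

    # Toy "oven cross-section": a coarse 2D grid of temperatures (deg C).
    # Real CFD meshes are far finer and also solve for air flow; this sketch
    # only illustrates heat diffusing between neighbouring cells.
    nx, ny, steps, alpha = 30, 30, 500, 0.2
    T = np.full((ny, nx), 20.0)        # start at room temperature
    T[-1, :] = 220.0                   # heating element along the bottom wall

    for _ in range(steps):
        interior = T[1:-1, 1:-1]
        # Explicit diffusion update: each cell moves towards its neighbours' average
        T[1:-1, 1:-1] = interior + alpha * (
            T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:] - 4 * interior
        )
        T[-1, :] = 220.0               # keep the element hot every step

    print("Coolest / hottest interior cell after the run:",
          round(T[1:-1, 1:-1].min(), 1), "/", round(T[1:-1, 1:-1].max(), 1))
    ```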

  • Harnessing the Power of Optimisation | Funis Consulting

    12 Mar 2025

    We have all been there: seeing a process and thinking there must be a better way to do this, perhaps even achieving a better, more accurate output. Whether it is a software flow, a manual process or even an entire system, optimisation helps businesses find and implement improvements that can have a huge impact. Certain processes can be far too complicated when they do not need to be. This means that employees' time is wasted, leading to sub-optimal productivity within a company. The more complicated processes are, the higher the risk of human error and setbacks, holding companies back from moving projects and innovation forward and from focusing on what really matters. Every system has its own pace, but when inefficiencies start to negatively affect a company, it is a good idea to pause for a moment, take a closer look at the different components and tools in place, and see where optimisation can make a real change to your business. Process optimisation can truly help businesses make that transformation, enabling teams to focus and spend their time and energy on what's important.

    Optimisation can bring a number of benefits to companies and can be used across all sectors, be it public policy, governmental planning, pharmaceutical, biotechnology, transportation, mobility services, manufacturing and operations, FMCG, supply chain and logistics, healthcare, medical applications and finance, just to name a few.

    To understand optimisation, one first has to understand predictive modelling. In predictive modelling, as long as we know the input x and the relationship between x and y (i.e. f(x)), we are able to predict the output y. You might be familiar with the example below from your school days, which illustrates the equation of a straight line: y = mx + c, where m is the gradient (or slope) and c is the intercept. In process optimisation, m and c could be your process settings. Here, by knowing x and f(x), you are able to predict the output y.

    Graph showing the correlation between x and y

    So, taking the example above, optimisation comes in when you need to know m and c, by knowing your input (x) and what you want to get out (y). Therefore, starting from the desired output (y), a known variable, we need to understand the relationship between x and y, i.e. f(x), which is unknown. We do this by utilising the data that is known to us. Optimisation, then, is finding out which variables you need to deploy, and in what manner, in order to get to the desired result or output. It works by attempting various iterations or value changes in the unknowns (in this case m and c, our process settings), varying these until we reach what is called a zero loss (0 Loss) and hence achieve the desired output y. In this way, we discover the parameters needed to get to the desired y. Optimisation can be single-objective or multi-objective, with the latter having more complexity, which might make obtaining a 0 Loss very difficult. In such cases, one finds what is called the global minimum, which is essentially the closest possible to a 0 Loss scenario. In optimisation, a specialised algorithm is used to run the simulations, according to a set of chosen rules and weights attributed to those rules.

    Let's take, for instance, a multi-objective process optimisation in a manufacturing setting. Imagine a number of different ingredients which need to be combined, each bearing different pricing, processing times, and various constraints. A specialised algorithm helps determine the variables and how these are to be deployed in order to get to the desired product or output: the best possible product, manufactured within a certain time, at a certain cost and of a certain quality. With a random sampling technique, when working on such a large number of variables and permutations, the higher the number of samples or iteration runs, the closer you get to a 0 Loss, and therefore the more accurate the output. This, however, leaves the probability of finding the global minimum up to chance. With a Bayesian optimisation technique we can reach the global minimum in a much more focused manner, taking far fewer iterations to do so, especially in a multivariate scenario, making it a preferred method for optimisation.
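    As a tiny, hedged illustration of the straight-line case above (with made-up numbers rather than real process data): given observed inputs x and the outputs y we want to reproduce, the settings m and c can be recovered by minimising the loss. For a straight line this has a direct least-squares solution; in the multi-objective cases described above, an iterative solver such as Bayesian optimisation would take the place of this closed-form step.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical process data: known inputs x and the outputs y we observed/want
    x = np.linspace(0, 10, 25)
    y = 3.2 * x + 7.5 + rng.normal(0, 0.4, x.size)   # "true" settings hidden in the data

    # Optimisation in its simplest form: choose m and c to minimise the loss
    # sum((m*x + c - y)^2). For a straight line this can be solved directly.
    A = np.column_stack([x, np.ones_like(x)])
    (m, c), residual, *_ = np.linalg.lstsq(A, y, rcond=None)

    print(f"Recovered settings: m = {m:.2f}, c = {c:.2f}, loss = {residual[0]:.3f}")
    ```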

  • The Importance of Choosing the Right Visualisation for your Data and your Audience. | Funis Consulting

    24 Sept 2025

    Data visualisation isn't about creating pretty pictures; it's about making data meaningful. The right choice of visual can reveal patterns, trends, and insights, while the wrong one risks confusion or misinterpretation. By tailoring visualisations to both the dataset and the audience, and designing with inclusivity in mind, we turn numbers into clarity that drives better decisions.

    In times driven by data and analytics, especially when it comes to major decision-making, the correct interpretation of the data is important, and that interpretation can be greatly influenced by how we present it. The best tool, no matter how good it is, is irrelevant if it cannot be used. Similarly, a dataset, no matter how good or detailed it is, loses its value if it cannot be understood by the people who need to use it. Solutions therefore need to be tailored to the dataset in question, as well as to the audience or users who will make use of the solution in their day-to-day work. The right visualisation can reveal trends, habits, relationships and outliers which would otherwise remain hidden in data that is not visualised in the right manner. The wrong choice of visualisation can confuse, mislead, misdirect or alienate the very people that need to understand the data. So, think about it: not only does the wrong choice of visualisation make your dataset incomprehensible, it can actually invite misinterpretation, something that you would not want!

    Let's talk about the audience for a moment, because the audience in visualisation matters a lot. Datasets normally have multiple stories to tell, so the role of visualisation is to make those stories as clear as possible to the intended audience. That audience could range from analytics experts, who might prefer complex plots such as box plots, heatmaps or network graphs to capture complex patterns and nuances, to non-technical stakeholders such as managers, policymakers and consumers, who might benefit from simple, fast-to-read, more intuitive visuals such as bar charts or line graphs. Inclusivity is very important because a visualisation tool should not assume that every user has the same level of statistical or technical literacy. For instance, a red and blue heatmap is great for an analytics expert, unless they are colourblind, in which case the heatmap could be produced in greyscale so that it can be easily read. Clear labelling, accessible colour schemes and interactive features can ensure that people from different backgrounds are able to draw meaning from the same data.

    The dataset type will most often dictate the most suitable visualisation approach. For categorical data such as product types, demographics and survey responses, the best visualisations are bar charts and column charts, because these highlight proportions and comparisons between discrete categories. For time-series data such as sales figures over months, stock prices and sensor readings, the best visualisations are line charts and area charts, as these show trends, patterns and seasonality over time. For geospatial data such as customer locations, climate zones and logistics routes, the best visualisations are maps, choropleth maps and bubble maps, as these add a spatial dimension, making it easy to spot regional variations or clusters. For hierarchical data, such as company structures and product families, the best visualisations are treemaps and sunburst charts, as these capture relationships and proportions across layers. For relational data such as social networks, process connections and supply chains, the best visualisations are normally network graphs and Sankey diagrams, as they show interactions, dependencies and flows. Distributions such as customer ages or processing times are best visualised through histograms, box plots and violin plots, to show variability, central tendencies and outliers. And multivariate data, such as product performance compared across multiple metrics, is best visualised through scatter plots, bubble charts and parallel coordinate plots, since these allow users to explore relationships between multiple variables at once.

    Accessibility should not be an afterthought. If your data tool is going to be accessed by stakeholders of different technical and/or analytical abilities, then it is important that this is kept in mind at all stages when designing the tool. The tool should have clarity, and jargon is to be avoided where possible. When colour palettes are involved, ensure colour accessibility. Interactivity in the design, such as the ability to zoom in or to filter, highlighting what matters, and consistency throughout the various layers and stages of the tool are also important.

    So, data visualisation is not about placing numbers neatly and prettily on a graph as a way to decorate a PowerPoint presentation during a meeting, nor to impress senior management with data overload. It does not work that way. Visualisation is an important choice to make, as the plot can make the difference between insight and misunderstanding. By matching the visualisation to your dataset and audience, and by designing with inclusivity in mind, we create tools that empower people and businesses to make better decisions.
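    As a small, hedged illustration of matching the chart to the data type, the sketch below uses matplotlib with entirely made-up numbers: a bar chart for categorical counts and a line chart for a time series, drawn in colourblind-friendly colours.

    ```python
    import matplotlib.pyplot as plt

    # Made-up data purely for illustration
    categories = ["Product A", "Product B", "Product C"]
    units_sold = [120, 95, 143]
    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    revenue = [10.2, 11.1, 9.8, 12.4, 13.0, 12.7]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

    # Categorical data -> bar chart: compares discrete categories at a glance
    ax1.bar(categories, units_sold, color="#0072B2")        # colourblind-safe blue
    ax1.set_title("Units sold by product")

    # Time-series data -> line chart: shows the trend over time
    ax2.plot(months, revenue, marker="o", color="#D55E00")   # colourblind-safe orange
    ax2.set_title("Monthly revenue (k)")

    fig.tight_layout()
    plt.show()
    ```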

  • Sustainable Food Systems through Data Modelling techniques | Funis Consulting

    07 May 2025

    Food and water constitute some of the most basic physiological needs. It is therefore important that these resources, staples of humanity's very existence, are taken appropriate and adequate care of. Science, coupled with technology, can greatly help innovate food systems.

    In a world where climate change is an everyday reality, careful resource management and getting the most out of whatever resources are available is essential. Natural resources to grow food, whether water or land, are precious and need to be managed effectively. Feeding the world's growing population requires more land, and more water to irrigate the crops. In a heating world, this is becoming ever more challenging. Moreover, once the food is grown it needs to be in the right place at the right time and in the right quantities. Too much food goes to waste because of overproduction at any given time, or simply because it can't be delivered in the right condition. Managing the resources to grow food, and managing which and how much food to grow, are two very different challenges. However, there is a common thread between them, which is to be smart about how we go about both.

    Starting with actually growing the crops themselves, too much water is often used due to indiscriminate irrigation, without taking other factors into consideration. Different plants require different amounts of water to grow at their best. Watering plants continuously (using drip irrigation) has been shown to help with plant growth, and is much more effective than watering in large amounts during a short period of time. However, watering plants on the soil surface leads to a lot of water evaporating before it can trickle down to the roots, where it is absorbed by the plant. Moreover, there are lots of other factors at play here, notably rainfall (or lack of it), sunlight intensity, air temperature and wind speed. All of these will affect how fast a plant will grow, how much water and nutrients it needs, its water transpiration rate and so on. By implementing systems to measure and process all of this real-time data, one can introduce an automated system for irrigating plants. This could control not only the quantity of water sent to irrigate the plants, but also the main nutrients needed (usually nitrogen, phosphorus and potassium) as well as the micronutrients. This could be done via a continuous closed feedback loop, which measures the soil conditions in real time and adjusts accordingly. More advanced systems could include imaging the crops with drones, looking at leaf coverage and leaf health, and again adjusting accordingly. However, this can only be done if there is data available to know what the ideal conditions are, coupled with predictive and optimisation models. Such automated systems, using these optimisation models, have the power to reduce water use through careful, targeted irrigation, and land use by growing crops in the most efficient manner.

    But growing crops effectively is only half the picture. If we grow food that then goes to waste because there is too much of it, or because it can't be delivered to the right place on time, then the sustainable use of water and land would have been for nothing. Good demand forecasting and supply chain management are absolutely key here.
    Predicting how much produce will be required in 6 to 12 months' time will never be 100% accurate, but it can get pretty close if a robust and validated data model is built. The vagaries of weather (for example, different weather to that expected might give rise to demand for different foods) and new consumer trends are hard to account for, but in most cases seasonal demand for different crops is fairly repetitive. Throw in the fact that different regions of the world are growing at different rates, and that different regions might grow and/or consume different crops, and this makes for a very interesting predictive model. Such models would help not only individual farmers to know what to sow and when, but would also help governments and regional institutions with agricultural policies. Collect data, but make sure it's data that can be used to build such models. If in doubt about what data to collect, speak to an expert who will help you devise a data collection plan. With good data come good models.
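    As a very simple, hedged sketch of the seasonal-demand idea above (all numbers invented): three years of monthly demand are decomposed into a linear trend plus an average monthly effect, and the next twelve months are forecast from those two pieces. A production forecast would use a validated model and far richer drivers such as weather, region and consumer trends.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Three years of made-up monthly demand (tonnes) with trend + seasonality + noise
    months = np.arange(36)
    seasonal = 20 * np.sin(2 * np.pi * months / 12)
    demand = 200 + 1.5 * months + seasonal + rng.normal(0, 5, 36)

    # Fit the trend, then average the leftover seasonal pattern per calendar month
    slope, intercept = np.polyfit(months, demand, 1)
    residuals = demand - (slope * months + intercept)
    month_effect = np.array([residuals[months % 12 == m].mean() for m in range(12)])

    # Forecast the next 12 months = trend + typical seasonal effect
    future = np.arange(36, 48)
    forecast = slope * future + intercept + month_effect[future % 12]
    print(np.round(forecast, 1))
    ```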

  • Process Modelling & Simulation: Calibrated Dynamic and Steady-State system infrastructures (Part 2 of 3) | Funis Consulting

    23 Apr 2025

    Modelling can be done on a system which is constantly in a dynamic state or on a system which is in a steady state. Transient behaviours can be embedded in a dynamic model, whereas steady-state models are used to simulate a system which is expected to behave in a much more stable manner. Which to use depends on the question or problem you are trying to resolve.

    Process Modelling & Simulation can be carried out on a single process or on a combination of different processes, combining these together to gain a holistic understanding of your system. You can use process modelling to model a dynamic system (a system changing over time) or a steady-state system (a system working when all the processes have been coupled together and equilibrated).

    In dynamic system process modelling, the system is constantly changing and therefore the variables are never constant, sometimes changing drastically and at a high frequency. This means that dynamic systems are influenced by variability; in the context of a new manufacturing line this could mean that you are modelling a process which is constantly changing. An example of this is a manufacturing line with frequent product switches. Another example of such a dynamic process is when you want to understand the impact of transient behaviours, such as when a product or resource changeover is carried out, or what happens during peak times. In this case, Discrete Event Simulation (DES) is the modelling type which is most often used. In the example of the manufacturing line, you are essentially modelling the flow of products manufactured (you can model from the raw material state all the way to a finished good), but also factoring in elements such as the people, the behaviours, the resources and the constraints, and then simulating multiple what-if scenarios. So you are essentially modelling a real-life situation, in this case a manufacturing line, in a digitalised format. You can "play" around with or test multiple scenarios in a safe digital space until you are ready to implement in real life, once the optimum settings have been found.

    A steady-state system, on the other hand, models a system which is already calibrated and where everything is running in a stable state. In manufacturing, for instance, this would be a system running at a constant rate, such as when you are focused on chemical or thermal processes. There are no changes being made to the system, and thus there are no changes to the system's output. Imagine we were to run multiple tests of chemical reactions taking place in a chamber, without any interference to the process. What we would model is a system in a digital environment with no transient behavioural elements, so once all of the coupled systems have converged we will know how the system will perform. In this case the modelling techniques used may vary.

    So, dynamic models factor in changes, including human interaction and behaviour, as well as constant or frequent changes to the process. Steady-state models, on the other hand, apply when there are no changes made to a process, and thus the process should reach a stable operation. In certain industries dynamic models are used more, for example in discrete manufacturing where there is an element of resource usage, frequent changes over time, human interaction, coordination between automation and manual processes, or settings and environmental changes. Dynamic models are more operational. Steady-state models, by contrast, are less about the operational aspect and more about how a system behaves when the variables are not changing.

    In a model, whether dynamic or steady-state, you can add as many variables as you need. Some simple examples are costs, throughputs, chemical reactions, mixing and even random events (for a dynamic system), and many more, depending on the model you are building and the problem or question you are trying to answer. These variables do not have to be modelled in isolation; they can be coupled and modelled together all at once in one larger model. This gives you a holistic picture of the system and how it works when calibration takes place, in either a dynamic system or a steady-state system.
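    As a minimal sketch of the Discrete Event Simulation idea described above, the example below uses the simpy library (our choice for illustration; the article does not prescribe a tool) to model a single machine with occasional product changeovers over one shift. All times and probabilities are invented.

    ```python
    import random
    import simpy

    random.seed(0)
    PROCESS_TIME = 4.0      # minutes per unit (illustrative)
    CHANGEOVER_TIME = 15.0  # minutes lost when the product type switches

    def production_line(env, machine, finished):
        product = "A"
        while True:
            if random.random() < 0.1:                 # occasional product switch
                product = "B" if product == "A" else "A"
                yield env.timeout(CHANGEOVER_TIME)    # transient changeover delay
            with machine.request() as req:
                yield req
                yield env.timeout(random.expovariate(1 / PROCESS_TIME))
                finished.append((env.now, product))

    env = simpy.Environment()
    machine = simpy.Resource(env, capacity=1)
    finished = []
    env.process(production_line(env, machine, finished))
    env.run(until=8 * 60)                             # simulate one 8-hour shift

    print(f"Units finished in the shift: {len(finished)}")
    ```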

  • Good data means good models. Bad data means misleading models, predictions and decisions. | Funis Consulting

    03 Sept 2025

    A model is only as good as the data behind it. In this article, six different subsets of the same dataset were tested using the same underlying function. Each of the subsets was fitted with quadratic, cubic and quartic curves. At first the curves looked like a great fit, but looking closely at the predictions the story changes. From this we learn that even a perfect fit can be misleading: without enough well-distributed data points, you risk choosing the wrong model, making the wrong predictions and taking the wrong decisions.

    In modelling work, the importance of a robust dataset cannot be overstated. Remember, as they say, garbage in, garbage out: if you work with weak or bad data, it skews predictions and derails decisions. If the data is flawed, needless to say the predictions will be too, and so will any decisions that follow. A robust dataset requires not only a sufficient number of datapoints but also an even distribution across the full range of interest.

    The example in the image above illustrates a toy case of y = x^2 * noise. Here the noise is included deliberately, as real-world data collection is never perfect and inevitably contains a degree of variability. As you can see in the image, the same dataset was processed in six different ways, as follows:

    top left = few datapoints, lower range
    top right = few datapoints, upper range
    middle left = sparse datapoints, spread across the whole range
    middle right = few datapoints, mid range
    bottom left = few datapoints, concentrated at the ends of the range
    bottom right = full dataset

    Next, a quadratic, a cubic and a quartic curve were fitted to the different datasets, and the results are very different! Take the one on the top left, where the errors (measured as the sum of the absolute differences between actual y and predicted y) were almost negligible; all three curves pass (nearly) exactly through the datapoints. Having said that, the real story appears at the upper end of x. The predicted values start to diverge sharply, not just from each other but also from the predictions made using the complete dataset (bottom right). It is a clear reminder that even when a model fits the data at hand perfectly, it may still tell a very different story outside the range you've measured.

    When the full dataset is used, all three fitted curves end up looking almost identical, but with the reduced datasets the picture changes dramatically. The quality of the predictions varies, ranging from a completely off-base shape with wildly inaccurate predictions (top right) to curves that look reasonable but still give poor predictions, as they drift noticeably from the results of the full dataset (middle left). What's striking is that many of these curves appear to fit the data well. The problem is that without enough points spread across the full range, there is no reliable way to tell which fit is actually correct. That uncertainty can easily lead to the wrong model and wrong predictions or conclusions. That is why having a solid dataset isn't just useful, it's essential.
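    The spirit of the experiment can be reproduced in a few lines of Python (with freshly generated toy data, so the numbers will not match the original figure): fit quadratic, cubic and quartic polynomials to a handful of lower-range points from a noisy y = x^2 dataset and compare their predictions at the top of the range with fits on the full dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Full toy dataset: y = x^2 with multiplicative noise, across the whole range
    x_full = np.linspace(1, 10, 40)
    y_full = x_full ** 2 * rng.normal(1.0, 0.05, x_full.size)

    # A reduced subset: only a few points from the lower end of the range
    x_few, y_few = x_full[:6], y_full[:6]

    for degree in (2, 3, 4):
        coeffs_few = np.polyfit(x_few, y_few, degree)
        coeffs_full = np.polyfit(x_full, y_full, degree)
        # Both fits look fine where there is data; compare them at x = 10
        print(f"degree {degree}: prediction at x=10 from few points = "
              f"{np.polyval(coeffs_few, 10):8.1f}, from full data = "
              f"{np.polyval(coeffs_full, 10):8.1f}")
    ```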

  • Fat Bloom in Chocolate | Funis Consulting

    26 Mar 2025

    Food needs to satisfy the five senses. Enjoying food means that not only should it taste good, it should also look good, and the texture should feel right in your mouth. Even if you have the most delicious food product, if it doesn't look, taste or smell how it should, then it's probably not going to be very successful with your consumers. This reminds me of fat bloom in chocolate. Not an uncommon phenomenon, fat bloom does not make a bar of chocolate look tasty, and actually tends to put people off. The good news is that one can implement changes that make a remarkable difference to the chances of fat bloom developing in chocolate products.

    Thousands of years ago, an ancient civilisation in what is now known as Ecuador was the first to recognise and revere the cocoa tree as a sacred source of food. Chocolate comes from cocoa beans, which actually don't taste anything like the chocolate we know and have a bland taste in their raw form. It is only when the cocoa beans go through the processes of fermentation and roasting that the familiar cocoa flavour develops. Chocolate is used worldwide in many shapes and forms, in both sweet and savoury dishes, and cocoa husks are also used to make tea, which is said to replenish energy and boost the mind.

    When it comes to a chocolate bar, the mouthfeel, as well as how it looks, sounds when you break it, and smells, are of utmost importance. A good chocolate should "snap" when you break it, and the chocolate should be shiny with a rich, deep colour. Chocolate never ceases to amaze and indulge the world over, being probably the most loved and widely available confectionery around. So it is a most disappointing experience to open a chocolate bar and find it has a fuzzy white layer or spots. This phenomenon is called "blooming" or "fat bloom".

    To start off, fat bloom in chocolate is definitely not mould, and it's not a health hazard, so a chocolate with fat bloom is still good to eat. Having said that, it is definitely not something that you look forward to when unwrapping a nice bar of chocolate. So why does fat bloom occur? Fat bloom in chocolate is caused by uncontrolled crystallisation of the fat in the chocolate. Whilst crystallisation of fat is a natural occurrence, when there is no control over the number, size and orientation of the crystals, fat bloom is observed. In short, the physical crystalline phase of the fat molecules is not the one that gives you a snappy, shiny chocolate bar. One thing to note is that this is not a chemical phenomenon; none of the molecules in the chocolate are broken down and there is no chemical reaction taking place. A fun fact is that if you were to take the bloomed chocolate, melt it and temper it again, thus controlling the crystallisation of the fat, to form a chocolate bar, it would become shiny and snappy once again.

    So, what can be done to resolve the issue of fat bloom in chocolate, you might ask. The good news is that something can be done to greatly reduce the chances of fat bloom occurring. First and foremost, chocolate manufacturers should understand the root cause of fat bloom, as in most cases fat bloom issues can be resolved via formulation and/or process, depending on the case. When tackling formulation issues, one needs to understand the client's needs as well as the current interactions within the formulation at a chemical and physical level, to see whether any changes in formulation are needed. Many of you might think that changing the formulation of the product will definitely change its taste. Whilst in some cases this might be true, it is not necessarily the case if the right changes are implemented. Some alternative formulations are extremely close in terms of flavour and texture, and so there would be minimal to no impact on the end product.

    Process is another important factor to look at when tackling the issue of fat bloom. Changing or tweaking the manufacturing process can make a huge difference. In most cases this would not necessarily mean needing additional equipment or adding extra manufacturing costs. Sometimes it's the small tweaks that make a big difference. To determine the root cause, the entire process needs to be kept in mind: from manufacturing to storage, everything can affect the chance of fat bloom developing in chocolate.

    Other than fat bloom, chocolate can experience sugar bloom, which occurs when the chocolate is in a high-humidity environment. This happens when the chocolate is packed at high humidity levels, so it is much more easily controlled. What happens in such circumstances is that if you were to pack a product at high humidity, the humidity gets trapped inside the packaging, which then, due to temperature fluctuations in the supply chain, condenses inside the packaging, with the water dissolving the sugar, which then forms sugar crystals once the temperature increases again and the water re-evaporates. This type of bloom is a rare phenomenon. So, if you open a chocolate bar with a white fuzzy layer, it's highly likely to be fat bloom. It is not a health hazard, so eat to your heart's content. Having said that, it would be nice if your chocolate bar were shiny and snappy every time you unwrapped it.

  • Taming the Giants: Large-Scale Modelling and how Surrogate Models can be the right move | Funis Consulting

    09 Jul 2025

    Large-scale models can take ages to run, slowing down decision-making and frustrating users. Surrogate models offer a solution to this challenge. Surrogate models are simplified, faster alternatives trained on input-output data from the original model. While they aren't physics-based, they can mimic complex models closely and deliver results far more quickly.

    Large-scale modelling means developing and using computational models to simulate systems which are very complex in nature. It's all about managing complexity, as you have lots of variables and many scenarios, with often very time-consuming computations. These systems would normally require processing large amounts of data (or variables) within wide ranges to represent real-world systems at significantly large scales, both temporally and spatially, and so they require substantial computational power or time to solve.

    There are two types of large-scale models. The first type involves machine learning or statistical models trained on vast datasets; think along the lines of predictive models trained on millions of datapoints or on high-dimensional data. Such models are used in many fields, ranging from finance and marketing to bioinformatics. The second type is complex mechanistic or first-principles (physics-based) models, which are based on physical or chemical laws and rules, are often formulated as systems of differential equations, and can be used in engineering, environmental modelling, climate science, fluid dynamics or food process simulations.

    Take climate modelling, for instance: such models simulate the Earth's atmosphere, oceans, land surfaces and ice, and use fundamental laws of physics to predict how climate variables like temperature, rainfall or wind patterns change over time. Since they must cover the entire globe over decades or even centuries, they require huge computational resources. Another example is Computational Fluid Dynamics (CFD) in food processing, for designing processes such as spray drying or extrusion in food manufacturing. CFD models are used to simulate how fluids, such as air, steam or liquids, move and transfer heat or mass. These models are based on the Navier-Stokes equations and require fine-grained spatial and temporal resolution to capture key details. Running a single scenario can take hours or days, especially if the geometry or chemistry is complex, or if the material properties are complex and vary with conditions such as temperature or pressure.

    So, if you've ever worked with large-scale modelling, whether that's handling vast datasets or complex, physics-based models, you'll know that solving or training these models can take anywhere from a few minutes to several weeks, if not more. This time lag can be frustrating, especially for end users who may not fully understand why the results take so long. Often, this becomes a barrier to adoption. The good news, however, is that there is possibly a way around this.

    Surrogate models are simplified mathematical versions of your original model, constructed using the outcomes of simulations from that full-scale model. By running the original model under a variety of starting conditions or inputs, you collect a range of outputs. Provided the underlying model is robust, these input-output pairs can be used to train a new, much faster model that mimics the behaviour of the original. While this surrogate model won't be rooted in physical laws, it will be built on sound data generated from a model that is.

    That being said, two critical questions arise: how many original simulations do you need to execute, and will that take so long that building the surrogate model is no longer practical or feasible? The answer depends on several factors, mainly the complexity of your model and the breadth of the input space that you want to explore. If you're dealing with many variables across wide ranges, the effort required might be substantial. Still, it could be worthwhile. Surrogate models can offer results orders of magnitude faster than the full models. Building one isn't always straightforward, but if it makes your work more accessible and widely used, it might just be the right move.
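    A minimal sketch of the surrogate idea, under the assumption that the expensive model can be stood in for by a cheap function: sample the "full" model at a modest number of design points, train a Gaussian-process surrogate on the input-output pairs with scikit-learn, and then query the surrogate almost instantly. A real surrogate would be trained on outputs of the actual large-scale model and validated before use.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_model(x):
        """Stand-in for a slow, physics-based simulation (illustrative only)."""
        return np.sin(3 * x) + 0.5 * x ** 2

    # 1. Run the full model at a modest number of design points
    X_train = np.linspace(0, 3, 15).reshape(-1, 1)
    y_train = expensive_model(X_train).ravel()

    # 2. Train a fast surrogate on those input-output pairs
    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
    surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    surrogate.fit(X_train, y_train)

    # 3. Query the surrogate (near-instant) instead of re-running the full model
    X_new = np.array([[0.7], [1.8], [2.6]])
    pred, std = surrogate.predict(X_new, return_std=True)
    for x, p, s in zip(X_new.ravel(), pred, std):
        print(f"x = {x:.1f}: surrogate = {p:.3f} ± {s:.3f}, "
              f"full model = {expensive_model(x):.3f}")
    ```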
