

  • Sustainable Food Systems through Data Modelling techniques | Funis Consulting

Sustainable Food Systems through Data Modelling techniques (07 May 2025)

Food and water are among the most basic physiological needs. It is therefore important that these resources, staples of humanity's very existence, are taken appropriate and adequate care of. Science, coupled with technology, can greatly help innovate food systems.

In a world where climate change is an everyday reality, careful resource management and getting the most out of whatever resources are available is essential. Natural resources to grow food, whether water or land, are precious and need to be managed effectively. Feeding the world's growing population requires more land and more water to irrigate the crops. In a heating world, this is becoming ever more challenging. Moreover, once the food is grown it needs to be in the right place at the right time and in the right quantities. Too much food goes to waste because of overproduction at any given time, or simply because it cannot be delivered in the right condition. Managing the resources to grow food, and managing which and how much food to grow, are two very different challenges. However, there is a common thread between them, which is to be smart about how we go about both.

Starting with actually growing the crops themselves, too much water is often used because of indiscriminate irrigation that does not take other factors into consideration. Different plants require different amounts of water to grow at their best. Watering plants continuously (using drip irrigation) has been shown to help with plant growth, and is much more effective than watering in large amounts over a short period of time. However, watering plants on the soil surface leads to a lot of evaporation before the water can trickle down to the roots, where it is then absorbed by the plant. Moreover, there are many other factors at play here, notably rainfall (or the lack of it), sunlight intensity, air temperature and wind speed. All of these will affect how fast a plant grows, how much water and nutrients it needs, its transpiration rate and so on.

By implementing systems to measure and process all of this real-time data, one can introduce an automated system for irrigating plants. This could control not only the quantity of water sent to irrigate the plants, but also the main nutrients needed (usually nitrogen, phosphorus and potassium) as well as the micronutrients. This could be done via a continuous closed feedback loop: measure the soil conditions in real time and adjust accordingly. More advanced systems could include imaging the crops with drones, looking at leaf coverage and leaf health, and again adjusting accordingly. However, this can only be done if data is available on what the ideal conditions are, coupled with predictive and optimisation models. Such automated systems, using these optimisation models, have the power to reduce water use through careful dosing, and land use by growing crops in the most efficient manner.
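To make the feedback loop concrete, here is a minimal Python sketch of a proportional irrigation controller: read the soil probes, compare against crop-specific targets, and dose water and nutrients in proportion to the error. The sensor function, crop targets and gains are hypothetical placeholders, not a real system.

```python
# Minimal sketch of a closed-loop irrigation controller. All values and
# function names are hypothetical placeholders, not a real installation.

CROP_TARGETS = {
    "tomato": {"moisture": 0.35, "nitrogen": 40.0},   # vol. fraction, mg/kg
    "lettuce": {"moisture": 0.45, "nitrogen": 25.0},
}

def read_soil_sensors():
    """Placeholder for real-time probes (soil moisture, nitrogen, air temperature)."""
    return {"moisture": 0.28, "nitrogen": 31.0, "air_temp_c": 29.0}

def control_step(crop, reading, k_water=50.0, k_n=0.5):
    """One pass of a simple proportional feedback controller."""
    target = CROP_TARGETS[crop]
    moisture_error = target["moisture"] - reading["moisture"]
    nitrogen_error = target["nitrogen"] - reading["nitrogen"]

    # Dose proportionally to the error; hot weather slightly raises the water dose.
    heat_factor = 1.2 if reading["air_temp_c"] > 28 else 1.0
    water_litres = max(0.0, k_water * moisture_error * heat_factor)
    nitrogen_grams = max(0.0, k_n * nitrogen_error)
    return water_litres, nitrogen_grams

water, nitrogen = control_step("tomato", read_soil_sensors())
print(f"dose this cycle: {water:.1f} L water, {nitrogen:.1f} g nitrogen per bed")
```

A real system would also log every cycle, feeding the data back into the predictive and optimisation models mentioned above.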
But growing crops effectively is only half the picture. If we grow food that then goes to waste because there is too much of it, or because it cannot be delivered to the right place on time, then the sustainable use of water and land will have been in vain. Good demand forecasting and supply chain management are absolutely key here.

Predicting how much produce will be required in 6 to 12 months' time will never be 100% accurate, but it can get pretty close if a robust and validated data model is built. The vagaries of weather (for example, different weather to that expected might give rise to demand for different foods) and new consumer trends are hard to account for, but in most cases seasonal demand for different crops is fairly repetitive. Throw in the fact that different regions of the world are growing at different rates, and that different regions might grow and/or consume different crops, and this makes for a very interesting predictive model. Such models would help not only individual farmers to know what to sow and when, but would also help governments and regional institutions with agricultural policies.

Collect data, but make sure it is data that can be used to build such models. If in doubt about what data to collect, speak to an expert who will help you devise a data collection plan. With good data come good models.
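As a toy illustration of how repetitive seasonal demand can be turned into a forecast, the sketch below averages three years of made-up monthly demand figures and scales the resulting seasonal profile by an estimated growth trend. A production forecast would use far richer data and proper validation.

```python
import numpy as np

# Minimal seasonal-demand forecast sketch; the historical figures are invented.
history = np.array([
    [120, 110, 150, 180, 220, 260, 280, 270, 230, 190, 150, 130],   # year 1
    [125, 118, 158, 188, 231, 270, 292, 279, 241, 199, 158, 137],   # year 2
    [131, 124, 166, 197, 242, 284, 305, 293, 252, 208, 165, 144],   # year 3
])

seasonal_profile = history.mean(axis=0)              # typical demand per calendar month
yearly_totals = history.sum(axis=1)
growth = np.polyfit(np.arange(3), yearly_totals, 1)[0] / yearly_totals.mean()

# Forecast for next year: seasonal shape scaled by the estimated growth rate.
forecast = seasonal_profile * (1 + growth)
for month, value in enumerate(forecast, start=1):
    print(f"month {month:2d}: ~{value:.0f} tonnes")
```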

  • Purchase Decisions and Why Preference Models help with Smarter Product Design | Funis Consulting

Purchase Decisions and Why Preference Models help with Smarter Product Design (26 Nov 2025)

Behind every purchase there is a story: needs, habits, emotions, trade-offs. That is why there is no such thing as "the customer"; instead, there are many segments with different motivations. Modelling and simulation help uncover these groups and what truly resonates with them. Scenario analysis then lets you test ideas safely: what if trends change? What if costs rise? Data does not replace intuition, it sharpens it, and in the end better decisions come from understanding people, not just numbers.

What goes on inside a customer's mind long before a purchase is made is fascinating. Decisions are shaped by many factors, such as price, brand reputation, emotion, convenience and more. First comes the moment of recognition, realising that you need a product. This might be a true need, i.e. a product essential for your survival, e.g. food, or a want (not necessary for survival but something that improves the quality of life). This recognition can be triggered either by external stimuli, e.g. an advertisement, or by internal feelings and motivations. Once that need is identified, the person starts searching for ways to tackle it and evaluates alternatives between different options based on things like price, quality, features and value. A purchase decision is then made. After the purchase, the consumer reflects on whether the decision met their expectations, which often leads to repeat purchases... or the opposite in case of dissatisfaction.

Inside our minds, long before the purchase is made, a subtle dance of trade-offs takes place. We weigh different combinations until the "right" combination is found based on our needs, budget, priorities, urgency and even what we have heard about the product (online reviews, for instance, can make or break a purchase). The factors that affect the purchase decision are personal to us. Let's take the example of choosing a mobile phone. I might prioritise a better camera because I enjoy taking photos wherever I go. Given my budget and my preference for a specific brand, I narrow the choice down to two models. One offers a better camera and better battery life, but it is last year's model. The newer model has a shorter battery life. Because I would rather have the latest version, I am willing to accept the shorter battery life, even if that means carrying a power bank, rather than buy the previous model. Meanwhile, someone else with the same budget and the same brand preference might value battery life more than having the newest release and therefore chooses the other option.

So decision-making is sort of a black box (or, as that phrase from Forrest Gump goes, "like a box of chocolates, you never know what you're gonna get"). However, once you start working with preference-driven models, that box opens and starts showing colour, and that box of chocolates is not so random after all. Start with a simple question when it comes to designing a successful product: "Which features do people TRULY care about?" Be careful not to assume which features they value; look at the ones they actually choose when no one else is watching. Once you start quantifying these patterns, a story emerges. Customer Preference Modelling becomes a window into how people weigh different attributes, how they compromise between them and hence how they prioritise. It reveals how one product feels intuitive to one person and completely irrelevant to another.
However, understanding these preferences is not enough. It is what you do with that knowledge that is important. That is where Product Portfolio Optimisation comes in. It is a balancing act between customer appeal and business performance, a bit like assembling a puzzle in which every piece matters, including the very human preferences that underpin them all. With multi-objective optimisation you can actually see where the sweet spots lie, and then there are the "what-ifs" that businesses ask, for example: "What if the trend shifts?", "What if material costs increase?", "What if we redesign the product entirely?", "What features can we remove without losing market share?" Scenario simulation becomes a bit of a crystal ball! Not to predict the future, but to illuminate how different futures might unfold: a small change in one variable can cause huge ripples.

Along the way, segmentation plays a part too. It turns out there isn't one customer, there are many: segments with shared motivations, shared frustrations, shared budgets, shared lifestyles... and the list goes on. Once you understand these groups you can tailor your product, pricing and communication in a way that feels more meaningful.

So yes, data, models (whether built on historical data or dynamic and looking to predict a future state), simulations... they are the backbone of understanding which features work, what to do if the scenarios or variables change, how to adapt quickly so as not to lose market share, and how to design the product to make it appealing and interesting to your customer base. Because the story is always about people in the end: how they choose, what they value, and how little shifts in design can make a world of difference.
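As a small illustration of how such preference weights can be recovered from observed choices, here is a sketch that simulates choices between pairs of phone profiles and fits a logistic regression on the attribute differences. The attributes, the "true" weights and the data are made-up assumptions; a real study would use properly designed choice experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy preference model: which attributes drive the choice between two phones?
rng = np.random.default_rng(3)
true_weights = np.array([1.2, 0.6, 0.9])     # camera, battery, latest-model (assumed)

n_choices = 2000
option_a = rng.uniform(0, 1, (n_choices, 3))  # attribute scores of option A
option_b = rng.uniform(0, 1, (n_choices, 3))  # attribute scores of option B

# A respondent picks A when its utility (plus some noise) beats B's.
utility_gap = (option_a - option_b) @ true_weights + rng.normal(0, 0.5, n_choices)
chose_a = (utility_gap > 0).astype(int)

# Fit on the attribute differences to recover the relative weights.
model = LogisticRegression().fit(option_a - option_b, chose_a)
for name, w in zip(["camera", "battery", "latest model"], model.coef_[0]):
    print(f"relative weight on {name}: {w:.2f}")
# The ratios between these weights show which trade-offs customers accept,
# e.g. how much battery life they give up for the newest release.
```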

  • What bends and what breaks under pressure? And which of the two will happen in a specific circumstance? Finite Element Analysis (or “FEA”) knows best. | Funis Consulting

What bends and what breaks under pressure? And which of the two will happen in a specific circumstance? Finite Element Analysis (or "FEA") knows best. (29 May 2025)

Whether a structure will hold or crack under various stressors, Finite Element Analysis (FEA) helps find the answer. When designing buildings and bridges, the integrity of the structure is everything, and that is what we aim to find out by using FEA. FEA breaks complex structures into smaller elements to simulate how these behave under stressors such as load, wind, vibration, earthquakes and so on. It helps to predict weak points and optimise structures or materials. This improves the safety of a structure before it is built.

When different forces act on a structure, the outcome might not be obvious. The outcome cannot simply be left to guesswork or high likelihood, because failing cannot be an option. Pressure applied on a structure can have a ripple effect across the entire system, and through Finite Element Analysis (FEA) we can determine where stress accumulates and, in turn, whether the structure will flex or bend, whether it will fail or break, or whether it will hold strong and unaltered. You do not want your structure to fail or break; having the structure flex or bend, or keeping the structure unaltered, can both be good outcomes, depending on the context in which they are used.

It is interesting to note that the most resilient structures are actually the ones that bend. This flexibility allows them to absorb energy rather than resist it, and it reduces the risk of failure. If you think of a willow tree swaying in the wind, you see that it is adapting to, rather than resisting, the wind - a case where the structure is flexing, or changing, in order to adapt and not snap. The same concept applies to car bumpers, which absorb the impact and change shape in a collision, to the cables of a suspension bridge under load, or to an aircraft wing vibrating as it works with the conditions and adapts its form. It is the ability to give a little under pressure that allows them to hold a lot.

Having said that, flexibility is not always the answer. There are contexts where rigidity is key and the structures must maintain their exact shape at all times, no matter the stressors imposed. Examples are support beams in a building, mounting brackets for heavy equipment or surgical tools. Even minimal movement could lead to dangerous failure, so not even the slightest flexibility is allowed here. Both approaches increase resilience under stress, but neither can thrive in the other's environment. You can't use mounting-bracket material for the wings of airplanes, and, on the other hand, you can't use car-bumper material for support beams. This is the beauty of FEA.

With Finite Element Analysis we can simulate how a structure will behave under specific physical conditions before anything is physically built. It works by dividing the design of the structure into small, manageable elements, and this modelling technique helps us predict how a material, or a part made of it, will respond to stressors, whether these are strains, vibration, heat, pressure and so on. Let's take earthquake-prone areas, for example. In these settings, buildings and infrastructure must be strong enough to withstand dynamic forces while flexible enough to avoid cracking under sudden shifts.
If the material is too rigid, it can snap and break. If the material is too soft, it may collapse. There is a delicate balance which needs to be respected, and that is the balance FEA can help us find. Using modelling techniques such as FEA supports smarter, safer and more sustainable designs. Whether the need is to stay perfectly rigid or to flex under pressure, the methodology enables an understanding of materials and structures long before they exist in the real world, giving you more confidence in the safety of your structures.
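To show the mechanics in miniature, here is a sketch of the simplest possible finite element model: a one-dimensional axial bar split into linear elements, with the global stiffness matrix assembled element by element and solved for displacements and stresses. The material properties and load are illustrative assumptions.

```python
import numpy as np

# Minimal 1D FEA sketch: an axial bar fixed at one end and pulled at the other.
# All numbers are illustrative assumptions (steel-like properties).

E = 210e9        # Young's modulus (Pa)
A = 1e-4         # cross-sectional area (m^2)
L = 2.0          # bar length (m)
n_elem = 10      # number of finite elements
n_node = n_elem + 1
le = L / n_elem  # element length

# Assemble the global stiffness matrix from identical 2-node elements.
k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.zeros((n_node, n_node))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e

# Load vector: 10 kN axial pull at the free end.
F = np.zeros(n_node)
F[-1] = 10e3

# Boundary condition: node 0 is fixed, so solve the reduced system.
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# Element stress follows from the displacement gradient.
stress = E * np.diff(u) / le
print("tip displacement [m]:", u[-1])
print("axial stress [MPa]:", stress[0] / 1e6)
```

Real FEA packages do exactly this, only in three dimensions, with far richer element types, material models and boundary conditions.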

  • Discrete Element Modelling (DEM): Getting Granular with Simulation | Funis Consulting

Discrete Element Modelling (DEM): Getting Granular with Simulation (13 Aug 2025)

When a material can be treated neither as a solid nor as a fluid, as is the case for powders and granules, Discrete Element Modelling (DEM) comes in. DEM simulates how each individual particle moves, collides, sticks or breaks. In dairy, for instance, DEM helps tackle real issues with milk powder such as caking, segregation during mixing and breakage during transportation, just to name a few. Therefore, by modelling every granule, DEM helps optimise hopper design, reduce downtime and improve product quality before problems hit the production line.

Not all materials behave like fluids or rigid solids; some are neither and fall somewhere in between. It is exactly there that Discrete Element Modelling (DEM) shines bright! DEM is a particle-based simulation method. In simple terms, this means that each particle in the simulation is treated as a distinct object (rather than treating the material as a continuous mass, as we do in fluid dynamics, for instance). So, in DEM, each particle has its own position, velocity, shape and behaviour, and DEM tracks these particles as they move, collide, stick, slide or roll over time. In short, DEM calculates contact forces, such as friction, cohesion and restitution, between particles and between particles and walls.

Think of sugar granules falling into a sack. Each granule in this case would be represented individually, and the individual granules' paths, the pile shape, and their bouncing, sticking or crushing can all be visualised and/or measured. Furthermore, you can change particle size, shape or stickiness and see how the flow changes. This particle-level detail makes DEM especially useful when you are studying materials which behave neither like fluids nor like solids, when particle interactions dominate the system, such as with powders, grains and tablets, and when you want to understand segregation, breakage or jamming, which emerge from how particles behave individually and collectively.

DEM is used in hopper and silo design, when you want to avoid bridging, arching and flow inconsistencies; in mixing and blending, when you want to assess segregation risks to improve homogeneity; and in tablet coating or compaction, to simulate mechanical stresses and surface contact. It is also used in conveying and transport to optimise equipment and reduce breakage and dusting, as well as in additive manufacturing to model powder spreading and deposition. On the flip side, DEM can be computationally intensive and calibration is not easy; however, it provides insight into problems that are otherwise unclear and tackled by trial and error, which wastes time and resources and lacks precision.

Let's take milk powder as a practical example. In milk powder production, spray drying is followed by bulk powder handling operations: conveying, storage, mixing and packaging. Milk powder is cohesive, hygroscopic (it absorbs moisture) and often fragile, and during post-drying handling manufacturers might face issues such as segregation due to particle size variation leading to uneven composition, caking and clumping in hoppers or silos, inconsistent flow rates during packaging, excessive dust generation during pneumatic conveying and product degradation from mechanical stress. All of these issues can in fact be addressed using DEM.
In dairy, where food safety, consistency and hygiene are non-negotiable, DEM offers a way to proactively address flow issues and product damage before they reach the production floor or the customer. Not only can dairy producers optimise hopper design and wall angles to prevent blockages, but they can also reduce downtime due to flow stoppages or cleaning, improve product uniformity and reduce waste, as well as inform decisions on process parameters such as airflow, velocity and drop height in conveying systems.
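To give a flavour of what a DEM time step looks like, here is a toy sketch of a few grains falling into a box with a linear spring-dashpot contact model. The particle properties are illustrative assumptions and are not calibrated to milk powder; industrial DEM studies use dedicated solvers, richer contact models (friction, cohesion) and calibrated parameters.

```python
import numpy as np

# Toy 2D DEM: grains settle into a box under gravity with spring-dashpot contacts.
n = 12
radius, mass = 0.01, 1e-3          # m, kg
k_n, c_n = 2e3, 1.0                # contact stiffness (N/m), damping (N s/m)
g = np.array([0.0, -9.81])
dt, steps = 5e-5, 12000            # explicit time stepping for 0.6 s
box = 0.2                          # box width (m)

rng = np.random.default_rng(0)
pos = rng.uniform(radius, box - radius, (n, 2))
pos[:, 1] = rng.uniform(box / 2, box - radius, n)   # start in the upper half
vel = np.zeros((n, 2))

for _ in range(steps):
    force = np.tile(mass * g, (n, 1))

    # Particle-particle contacts (O(n^2) pair check is fine for a toy example).
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0:
                normal = d / dist
                rel_v = np.dot(vel[i] - vel[j], normal)
                fn = (k_n * overlap - c_n * rel_v) * normal
                force[i] += fn
                force[j] -= fn

    # Floor and side walls treated as stiff spring-dashpot contacts.
    pen = radius - pos[:, 1]
    hit = pen > 0
    force[hit, 1] += k_n * pen[hit] - c_n * vel[hit, 1]
    for wall, sign in ((radius, 1.0), (box - radius, -1.0)):
        pen = sign * (wall - pos[:, 0])
        hit = pen > 0
        force[hit, 0] += sign * (k_n * pen[hit]) - c_n * vel[hit, 0]

    # Semi-implicit Euler integration.
    vel += force / mass * dt
    pos += vel * dt

print("settled bed height [m]:", round(pos[:, 1].max() + radius, 3))
```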

  • The Importance of Choosing the Right Visualisation for your Data and your Audience. | Funis Consulting

The Importance of Choosing the Right Visualisation for your Data and your Audience. (24 Sept 2025)

Data visualisation isn't about creating pretty pictures; it's about making data meaningful. The right choice of visual can reveal patterns, trends and insights, while the wrong one risks confusion or misinterpretation. By tailoring visualisations to both the dataset and the audience, and designing with inclusivity in mind, we turn numbers into clarity that drives better decisions.

In times driven by data and analytics, especially when it comes to major decision-making, the correct interpretation of the data is important, and it can be greatly influenced by how we present it. The best tool, no matter how good it is, is irrelevant if it cannot be used. Similarly, a dataset, no matter how good or detailed it is, loses its value if it cannot be understood by the people who need to use it. Therefore, solutions need to be tailored to the dataset in question, as well as to the audience or users who will make use of the solution in their day-to-day work. The right visualisation can reveal trends, habits, relationships and outliers which would otherwise remain hidden in data that is not visualised in the right manner. The wrong choice of visualisation can confuse, mislead, misdirect or alienate the very people who need to understand the data. So, think about it: not only does the wrong choice of visualisation make your dataset incomprehensible, it can actually invite misinterpretation, something you would not want!

Let's talk about the audience for a moment, because the audience in visualisation matters a lot. Datasets normally have multiple stories to tell, so the role of visualisation is to make those stories as clear as possible to the intended audience. This could range from analytics experts, who might prefer complex plots such as box plots, heatmaps or network graphs to capture complex patterns and nuances, to non-technical stakeholders such as managers, policymakers and consumers, who might benefit from simple, fast-to-read, more intuitive visuals such as bar charts or line graphs. Inclusivity is very important because a visualisation tool should not assume that every user has the same level of statistical or technical literacy. For instance, a red and blue heatmap may be great for an analytics expert, unless he or she is colourblind, in which case the heatmap could be rendered in greyscale (or a colourblind-safe palette) so that it can be read easily. Clear labelling, accessible colour schemes and interactive features can ensure that people from different backgrounds are able to draw meaning from the same data.

The dataset type will most often dictate the most suitable visualisation approach. For categorical data such as product types, demographics and survey responses, the best visualisations are bar charts and column charts, because these highlight proportions and comparisons between discrete categories. For time-series data such as sales figures over months, stock prices and sensor readings, the best visualisations are line charts and area charts, as these show trends, patterns and seasonality over time. For geospatial data such as customer locations, climate zones and logistics routes, the best visualisations are maps, choropleth maps and bubble maps, as these add a spatial dimension, making it easy to spot regional variations or clusters.
For hierarchical data, such as company structures and product families, the best visualisations are treemaps and sunburst charts, as these capture relationships and proportions across layers. For relational data such as social networks, process connections and supply chains, the best visualisations are normally network graphs and Sankey diagrams, as they show interactions, dependencies and flows. Distributions such as customer ages or processing times are best visualised through histograms, box plots and violin plots, which show variability, central tendencies and outliers. And multivariate data, such as product performance compared across multiple metrics, is best visualised through scatter plots, bubble charts and parallel coordinate plots, since these allow users to explore relationships between multiple variables at once.

Accessibility should not be an afterthought. If your data tool is going to be accessed by stakeholders of different technical and/or analytical abilities, then it is important that this is kept in mind at all stages of designing the tool. The tool should be clear, and jargon is to be avoided where possible. When colour palettes are involved, ensure colour accessibility. Interactivity in the design, such as the ability to zoom in or to filter, highlighting what matters, and consistency throughout the various layers and stages of the tool are also important.

So, data visualisation is not about placing numbers neatly and prettily on a graph to decorate a PowerPoint presentation during a meeting, nor to impress senior management with data overload. It does not work that way. Visualisation is an important choice to make, as the plot can make the difference between insight and misunderstanding. By matching the visualisation to your dataset and audience, and by designing with inclusivity in mind, we create tools that empower people and businesses to make better decisions.
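As a small illustration of matching the chart type to the data type, the sketch below plots the same hypothetical sales data twice: as a line chart for the time-series view and as a bar chart of annual totals for a quick categorical comparison, using colourblind-safe colours.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly sales for two product categories.
months = np.arange(1, 13)
rng = np.random.default_rng(1)
sales_a = 100 + 8 * months + rng.normal(0, 10, 12)
sales_b = 150 + rng.normal(0, 10, 12)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Time-series data: a line chart makes trend and seasonality visible.
ax1.plot(months, sales_a, marker="o", color="#0072B2", label="Category A")
ax1.plot(months, sales_b, marker="s", color="#E69F00", label="Category B")
ax1.set(title="Monthly sales (line chart)", xlabel="Month", ylabel="Units")
ax1.legend()

# Categorical comparison: a bar chart of annual totals for non-technical readers.
ax2.bar(["Category A", "Category B"], [sales_a.sum(), sales_b.sum()],
        color=["#0072B2", "#E69F00"])
ax2.set(title="Annual totals (bar chart)", ylabel="Units")

fig.tight_layout()
plt.show()
```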

  • When to Hire, When to Hold: Making Smarter Staffing Decisions through Marginal Analysis | Funis Consulting

When to Hire, When to Hold: Making Smarter Staffing Decisions through Marginal Analysis (21 Nov 2025)

It is easy to stay in the abstract when talking about supply and productivity. Yet at which point does adding more people stop increasing value? It is something that is dealt with quite often in the world of modelling and optimisation. At first, additional hands boost efficiency because tasks can be specialised and workflows improved. But eventually, physical space, equipment, technology or processes become the limiting factor, and that is where diminishing marginal returns set in. What is fascinating is how clearly this shows up when you visualise it with marginal product and average product curves. These tell you where efficiency peaks, where it plateaus, and where hiring more actually makes things worse. And this is exactly where modelling, simulation and data-driven forecasting shine. Before a company commits to new staff, new equipment or new investment, it can explore "what if?" scenarios safely, spot bottlenecks, test assumptions and make decisions with far more confidence.

Organisations that bring a product or service to the market, whether a small local hair salon, a multinational corporation or even a government providing public services, must make a few key decisions about their offering, such as the price to attribute to the good or service and the quantities to supply. If you are a local hairdresser, you likely already have a very good understanding of the going rate for your services, and through basic observation of other salons you can gauge how to operate and how to organise tasks among your staff. However, when we look at the entire economy for a particular good or service, the exercise grows much more complex.

So what motivates a company to supply the market with a good or service? There could be different reasons, such as gaining market share, but profit is a very important driver. Companies make a profit when the price of a good or service is higher than the cost of producing it. When costs are low and productivity increases, the incentive to expand production grows due to rising profits. Vice versa, if costs increase and productivity is low, firms become less motivated to produce at the current market price; instead, they might decide either to reduce output or to charge higher prices to protect their profit margins. Productivity is essentially a measure of efficiency: how efficiently inputs are converted into outputs. Better technology and working methods tend to increase productivity, enabling businesses to produce more with the same or even fewer resources, therefore decreasing the cost per unit of output. This increases profitability.

Let's take the example of a hair salon deciding on the number of resources to employ (the variable input). With 5 stylists, a total of 115 clients are served in a week, an average output of 23 clients per stylist. The output keeps increasing with every additional unit of labour. This goes on until an additional input no longer results in any additional output, perhaps because the salon can fit only so many styling chairs due to limited space. So businesses expanding their workforce will likely see output grow at first, due to specialisation. In the example of the hair salon, instead of one employee juggling colouring, cutting, cleaning, dealing with customers and so on, each employee can focus on the area they are best at, thus increasing efficiency.
Having said that, this has a ceiling: as more workers are added to a limited, fixed workspace and set of equipment, the gains from adding another resource start to shrink, until at some point there are no gains at all. This is known as diminishing marginal returns. The only way to push past this is to increase the fixed inputs. With our hair salon, for instance, we could take over additional space allowing more styling chairs. This shifts the production frontier outward.

The Marginal Product (MP) curve shows the extra output created by each extra unit of labour. The point where the MP and Average Product (AP) curves intersect, i.e. where MP equals AP, is where the additional output is exactly equal to the average. At this point, the AP stops increasing and reaches its maximum. Operating at the point where MP = AP gives the highest output per worker, and so is an indicator of efficiency. MP and AP curves help companies with hiring decisions, as they help identify the point of maximum labour efficiency and therefore the profit-maximising number of employees. They enable companies to understand when to hire more resources and when to hold. These curves also help companies avoid the stage where MP is zero or even negative. A negative MP is usually caused by too many workers employed with fixed capital. In our example, when too many hairdressers are employed in a confined space without enough styling chairs, workers start getting in each other's way, leading to a reduction in output and therefore to inefficiency and losses.

Modelling, simulation and data science offer powerful tools for understanding and optimising these economic relationships. Using techniques such as discrete event simulation, agent-based models or machine-learning forecasting, firms can explore how changes in labour, equipment, technology or pricing affect output long before they commit resources in the real world. These methods help reveal bottlenecks, quantify the impact of productivity improvements and test "what-if" scenarios, such as hiring additional staff or adopting new equipment, without disrupting day-to-day operations. For economists, these tools provide richer insights into how supply behaves under different conditions, enabling more accurate predictions of market responses and more informed decision-making for businesses and policymakers alike.
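A short sketch makes the MP and AP logic concrete. The table of weekly outputs below is made up (only the 115 clients served by 5 stylists comes from the example above); the code computes the average and marginal product for each staffing level and flags where diminishing and negative returns set in.

```python
import numpy as np

# Hypothetical weekly output of the salon for 1..8 stylists (illustrative numbers).
labour = np.arange(1, 9)
output = np.array([25, 54, 81, 100, 115, 123, 126, 124])

avg_product = output / labour                 # AP = total output / workers
marg_product = np.diff(output, prepend=0)     # MP = extra output per extra worker

for n, ap, mp in zip(labour, avg_product, marg_product):
    flag = ""
    if mp < 0:
        flag = "  <- negative MP: workers get in each other's way"
    elif mp < ap:
        flag = "  <- diminishing returns: MP below AP"
    print(f"{n} stylists: AP = {ap:5.1f}, MP = {mp:5.1f}{flag}")

# AP peaks roughly where MP crosses AP from above.
best = labour[np.argmax(avg_product)]
print("Highest output per stylist at", best, "stylists")
```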

  • Harnessing the Power of Optimisation | Funis Consulting

Harnessing the Power of Optimisation (12 Mar 2025)

We have all been there: seeing a process and thinking there must be a better way to do this, perhaps even achieving a better, more accurate output. Whether it is a software flow, a manual process or even an entire system, Optimisation helps businesses find and implement improvements that have a huge impact on the business.

Certain processes are far too complicated when they do not need to be. This means that people's time is wasted, leading to sub-optimal productivity within a company. The more complicated processes are, the higher the risk of human error and setbacks, which holds companies back from moving projects and innovation forward and from focusing on what really matters. Every system has its own pace, but when inefficiencies start to negatively affect a company, it is a good idea to pause for a moment, take a closer look at the different components and tools in place, and see where Optimisation can make a real change to your business. Process Optimisation can truly help businesses make that transformation, enabling teams to focus and spend their time and energy on what is important. Optimisation can bring a number of benefits to companies and can be used across all sectors, be it public policy, governmental planning, pharmaceuticals, biotechnology, transportation, mobility services, manufacturing and operations, FMCG, supply chain and logistics, healthcare, medical applications or finance, just to name a few.

To understand Optimisation, one first has to understand Predictive Modelling. In Predictive Modelling, as long as we know the input x and the relationship between x and y (i.e. f(x)), we are able to predict the output y. You might be familiar with the example below from your school days, which illustrates the equation of a straight line: y = mx + c, where m is the gradient (or slope) and c is the intercept. In process optimisation, m and c could be your process settings. Here, by knowing x and f(x), you are able to predict the output y.

[Graph showing the correlation between x and y]

So, taking the example above, Optimisation comes in when you need to know m and c, by knowing your input (x) and what you want to get out (y). Therefore, starting from the desired output (y), a known variable, we need to work out the relationship between x and y, i.e. f(x), which is unknown. We do this by utilising the data that is known to us. Optimisation, therefore, is finding out which variables you need to deploy, and in what manner, in order to get to the desired result or output. It works by attempting various iterations or value changes in the unknowns (in this case m and c, our process settings), varying these until we reach what is called a zero loss (0 Loss) and hence achieve the desired output y. In this way, we discover the parameters needed to get to the desired y.

Optimisation can be single-objective or multi-objective, with the latter having more complexity, which might make obtaining a 0 Loss very difficult. In such cases, one finds what is called the global minimum, which is essentially the closest possible to a 0 Loss scenario. In Optimisation, a specialised algorithm is used to run the simulations, according to a set of chosen rules and the weights attributed to those rules. Let's take, for instance, a multi-objective process Optimisation in a manufacturing setting.
Imagine a number of different ingredients which need to be combined, each with different pricing, processing times and various constraints. A specialised algorithm helps determine the variables and how these are to be deployed in order to get to the desired product or output: the best possible product, manufactured within a certain time and cost and to a certain quality.

With a Random Sampling technique, when working with such a large number of variables and permutations, the higher the number of samples or iteration runs, the closer you get to a 0 Loss and therefore the more accurate the output. This, however, leaves the probability of finding the global minimum up to chance. With a Bayesian Optimisation technique, we can reach the global minimum in a much more focused manner, taking far fewer iterations to do so, especially in a multi-variate scenario, which makes it a preferred method for Optimisation.
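As a minimal illustration of the idea, the sketch below recovers m and c from noisy data by minimising a mean-squared loss, first with plain random sampling and then with a focused search (SciPy's gradient-based optimiser is used here as a simple stand-in for a more guided method such as Bayesian Optimisation). The "true" settings and the data are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy optimisation: find the process settings m and c that minimise the loss.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y_observed = 2.5 * x + 1.0 + rng.normal(0, 0.5, x.size)  # "true" m = 2.5, c = 1.0

def loss(params):
    m, c = params
    return np.mean((m * x + c - y_observed) ** 2)  # mean squared error

# 1) Random sampling: try many (m, c) pairs and keep the best one.
candidates = rng.uniform(-10, 10, (5000, 2))
best_random = min(candidates, key=loss)

# 2) A focused search starting from a rough guess converges in far fewer evaluations.
best_focused = minimize(loss, x0=[0.0, 0.0]).x

print("random search  m, c:", np.round(best_random, 2), " loss:", round(loss(best_random), 3))
print("focused search m, c:", np.round(best_focused, 2), " loss:", round(loss(best_focused), 3))
```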

  • The Science and Value of Finite Element Analysis (FEA) in Food Packaging: Food packaging plays a crucial part in complex supply chains (Part 2 of 2) | Funis Consulting

The Science and Value of Finite Element Analysis (FEA) in Food Packaging: Food packaging plays a crucial part in complex supply chains (Part 2 of 2) (11 Jun 2025)

Behind every sealed lid there is a world of science and simulation, where temperature shifts, compression forces and impact drops are tested in a virtual setting before the physical prototype is built. This enables precision and removes the guesswork, in order to bring to your homes food that is safe, fresh and intact, enabling high-quality products. The future of food is about smarter design to reduce waste, increase performance and make faster decisions.

Last week, in our article "The Science and Value of Finite Element Analysis (FEA) in Food Packaging: Packaging is more than a mere container for your food product. (Part 1 of 2)", we spoke about how FEA can help companies make better decisions as to which packaging design to go for when considering various variables. Today we are going to give a more practical example of how food packaging plays a fundamental part in supply chains, ensuring the product arrives safely on our tables at home.

Let's say we are designing packaging for a chilled ready meal that needs to be transported across a regional supply chain and sold to supermarkets for the end-user to enjoy. The conditions are that the product must stay below 5 °C, remain intact during the various transportation stages, and work within a short shelf life of 7 days from end of production to consumption. Through Finite Element Analysis, or "FEA", we can use thermal modelling to understand the behaviour of the meal during shifts in temperature across different transportation, loading, unloading and storage scenarios. It helps us predict how well the packaging design insulates the product across various conditions and temperatures. By creating a virtual model and running simulations, you can compare various materials to understand thermal conductivity and insulation, and assess whether additional packaging design features, such as extra layers or vacuum sealing, are needed for extra protection.

Through FEA's structural analysis and simulation, you understand how the packaging will endure stressors such as stacking in a warehouse, by applying virtual compression and impact forces to see whether it survives the pressure, what happens if the product is dropped from a certain height, and whether the seal will hold under different pressures. As you observe the behaviour, you can tweak and optimise the design, for example the packaging's thickness, stronger corners or a reinforced lid, while balancing any additional protective features against wasting excess material. It is about finding the right balance of cost, quality and sustainability, and finding the lightest, lowest-cost material that meets the requirements. Other considerations are also factored in, such as whether the packaging will fit securely in standard crates for handling by retailers, how the design will fit on a pallet, whether its shape and rigidity will allow automated handling in a warehouse, and how it will fare under different temperature shifts.
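To illustrate just the thermal side of such a study, here is a toy one-dimensional heat-conduction model of an insulating packaging wall, solved with an explicit finite-difference scheme as a simple stand-in for a full FEA thermal simulation. Material properties and temperatures are illustrative assumptions.

```python
import numpy as np

# Toy 1D transient heat conduction through a foam-like packaging wall.
k, rho, cp = 0.03, 30.0, 1500.0      # conductivity (W/mK), density (kg/m^3), heat capacity (J/kgK)
alpha = k / (rho * cp)               # thermal diffusivity (m^2/s)
thickness = 0.02                     # wall thickness (m)
n = 41
dx = thickness / (n - 1)
dt = 0.4 * dx**2 / alpha             # stable explicit time step

T = np.full(n, 4.0)                  # wall starts at the chilled temperature
T_outside, T_inside = 25.0, 4.0      # warm ambient during loading vs product side

t, t_end = 0.0, 2 * 3600.0           # simulate a 2-hour loading window
while t < t_end:
    T[0], T[-1] = T_outside, T_inside                             # fixed boundary temperatures
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])  # heat diffusion update
    t += dt

heat_flux = k * (T[-2] - T[-1]) / dx                              # W/m^2 reaching the product side
print(f"heat flux into the product after 2 h: {heat_flux:.1f} W/m^2")
```

Comparing this flux across candidate materials and thicknesses is, in miniature, the trade-off exercise described above.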
By using data-driven and physics-based modelling and simulation early in the development process, you can reduce the number of physical prototypes needed, reduce packaging failures and take faster, better-informed decisions on the right packaging design in line with business, technical and sustainability goals. This improves cost, quality and time, making food systems better. The packaging development process becomes proactive and strategic, rather than a trial-and-error exercise that places a burden on companies and societies. It is a smarter way to understand how you can deliver to your consumers in the right manner.

Funis Consulting works at the intersection of R&D and Innovation through the use of modelling and simulation techniques. Whether it is understanding new materials, improving the robustness or cost-efficiency of your current design or systems, or reducing environmental impact through smart design, the opportunities for meaningful change in foods and food systems are there and they are vast. We believe in a better way to do things, to create real-world impact for a better world.

  • Strategic decision-making under resource constraints - moving "beyond the curve" | Funis Consulting

Strategic decision-making under resource constraints - moving "beyond the curve" (05 Nov 2025)

In every economy resources are limited, whether that is land, labour or capital. Modelling is about defining those limits and exploring possibilities. The Production Possibilities Curve (PPC) captures this trade-off: it shows what is feasible. In order to move "beyond the curve", systems need innovation or efficiency gains, and simulation helps us test how, before actioning this in the real world. It helps with quantifying opportunity costs and reveals the shape of trade-offs. Every scenario is a choice between competing objectives, and optimisation then becomes a story of priorities and constraints. In the end, modelling isn't just about computation; it is about decision-making made visible.

Resources are finite in all economies, from land and labour to capital and technology. So, when a country decides on its strategy, it must also decide how to allocate these limited resources across competing activities. This is therefore an optimisation problem: how can we achieve the greatest possible outcome given the constraints?

Let's simplify and pretend that our economy is a two-good economy, where you can either manufacture easels or produce oil. If all the resources were dedicated to the manufacture of easels and none were allocated to oil production, the economy would produce a maximum of 200 easels. Vice versa, if all resources were allocated to the production of oil only, there would be 170 units. These are the trade-offs, and they can easily be visualised with a Production Possibilities Curve (PPC). The PPC is a basic and fundamental economic concept, and it shows the maximum combinations of two goods that can be obtained with limited resources and technology. Along the curve of the PPC, resources are used at full productive efficiency. Producing more of one good can be achieved only by producing less of the other, because of the limitation in resources. This is the opportunity cost. In this example, if I produce an additional 20 units of oil, the opportunity cost is 15 fewer easels.

But what if society wants to become more productive? As we have seen, with the resources at our disposal it is simply not possible to "go beyond the curve". This is only achieved through economic growth, typically driven by things such as investment in capital goods or technological advancement. So society decides to forgo producing easels in favour of new technology or new equipment, as this decision will, in the future, increase the productive capacity of the economy (therefore "moving beyond the curve"). The decision to do this has what we call distributional implications, in the sense that the effects of such decisions (e.g. whose consumption is being reduced now and who will benefit in the future) are not evenly distributed across society. For instance, an artist might prefer to have his easels now, whereas a mother with young children might prefer to forgo the easels. Economics is not a simple matter of numbers and models; there are also ethical and policy considerations, as well as the preferences of society.

To conclude, the Production Possibilities Curve (PPC) mirrors the logic of modelling and simulation itself: you define your constraints and explore the feasible space, you evaluate trade-offs between competing objectives, and you use analysis to inform strategic choices.
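The sketch below draws such a two-good frontier using the 200-easel and 170-unit intercepts from the example; the concave shape of the curve and the starting point for the opportunity-cost calculation are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy Production Possibilities Curve for the easels-vs-oil economy.
oil = np.linspace(0, 170, 200)

def easels_at(q_oil):
    # Quarter-ellipse frontier through the two intercepts (assumed shape).
    return 200 * np.sqrt(1 - (q_oil / 170) ** 2)

# Opportunity cost of producing 20 more units of oil, starting from 100 units.
q0, q1 = 100.0, 120.0
cost = easels_at(q0) - easels_at(q1)
print(f"Opportunity cost of +20 oil from this starting point: about {cost:.0f} easels")

plt.plot(oil, easels_at(oil), label="Production Possibilities Curve")
plt.scatter([q0, q1], [easels_at(q0), easels_at(q1)], color="red")
plt.xlabel("Oil (units)")
plt.ylabel("Easels (units)")
plt.title("Two-good economy: feasible combinations")
plt.legend()
plt.show()
```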
Understanding such economic fundamentals helps frame simulations not just as computational exercises but as decision-making tools grounded in the real-world dynamics of scarcity, efficiency and growth.

  • Process Modelling & Simulation: Calibrated Dynamic and Steady-State system infrastructures (Part 2 of 3) | Funis Consulting

Process Modelling & Simulation: Calibrated Dynamic and Steady-State system infrastructures (Part 2 of 3) (23 Apr 2025)

Modelling can be done on a system which is constantly in a dynamic state or on a system which is in a steady state. Transient behaviours can be embedded in a dynamic model, whereas steady-state models are used to simulate a system which is expected to behave in a much more stable manner. Which to use depends on the question or problem you are trying to resolve.

Process Modelling & Simulation can be carried out on a single process or on a combination of different processes, combining these to gain a holistic understanding of your system. You can use process modelling to model a dynamic system (a system changing over time) or a steady-state system (a system working once all the processes have been coupled together and equilibrated).

In dynamic system process modelling, the system is constantly changing and therefore the variables are never constant, sometimes changing drastically and at a high frequency. This means that dynamic systems are influenced by variability, and in the context of a new manufacturing line this could mean that you are modelling a process which is constantly changing. An example of this is a manufacturing line with frequent product switches. Another example of such a dynamic process is when you want to understand the impact of transient behaviours, such as when a product or resource changeover is carried out, or what happens during peak times. In this case, Discrete Event Simulation (DES) is the modelling type most commonly used. In the example of the manufacturing line, you are essentially modelling the flow of products manufactured (you can model from the raw-material state all the way to a finished good), while also factoring in elements such as the people, the behaviours, the resources and the constraints, and then simulating multiple what-if scenarios. So you are essentially modelling a real-life situation, in this case a manufacturing line, in a digitalised format. You can "play" around and test multiple scenarios in a safe digital space until you are ready to implement in real life, once the optimum settings have been found.

A steady-state system, on the other hand, is a model of a system which is already calibrated and running in a stable state. In manufacturing, for instance, this would be a system running at a constant rate, such as when you are focused on chemical or thermal processes. No changes are being made to the system, and thus there are no changes to the system's output. Imagine running multiple tests of chemical reactions taking place in a chamber (therefore, without any interference to the process). What we model is a system in a digital environment with no transient behavioural elements, so once all of the coupled systems have converged we will know how the system will perform. In this case, the modelling techniques used may vary.

So, dynamic models factor in changes, including human interaction and behaviour, as well as constant or frequent changes to the process. Steady-state models, on the other hand, describe a process with no changes being made to it, so that the process reaches stable operation.
Dynamic models are used more in discrete manufacturing, where there is an element of resource usage, frequent changes over time, human interaction, coordination between automation and manual processes, or changes in settings and environment. Dynamic models are more operational. Steady-state models, on the other hand, are less about the operational aspect and more about how a system behaves when the variables are not changing.

In a model, whether dynamic or steady-state, you can add as many variables as you need. Some simple examples are costs, throughputs, chemical reactions, mixing and even random events (for a dynamic system), and many more, depending on the model you are building and the problem or question you are trying to answer. These variables do not have to be modelled in isolation; they can all be coupled and modelled together in one larger model. This gives you a holistic picture of the system and how it works once calibrated, whether as a dynamic system or a steady-state system.
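To give a feel for what a small Discrete Event Simulation looks like in code, here is a toy model of a line with periodic product changeovers, written with the SimPy library as one possible tool. All timings are illustrative assumptions rather than real process data.

```python
import random
import simpy

# Toy DES of a packing line with periodic product changeovers (illustrative timings).
random.seed(0)
PROCESS_TIME = 2.0        # average minutes per unit
CHANGEOVER_EVERY = 50     # units between product switches
CHANGEOVER_TIME = 30.0    # minutes lost per changeover
SHIFT_LENGTH = 8 * 60.0   # one 8-hour shift, in minutes

def line(env, stats):
    units = 0
    while True:
        # Variable processing time captures the transient behaviour of the line.
        yield env.timeout(random.expovariate(1.0 / PROCESS_TIME))
        units += 1
        stats["units"] = units
        if units % CHANGEOVER_EVERY == 0:
            yield env.timeout(CHANGEOVER_TIME)   # product switch stops the line

stats = {"units": 0}
env = simpy.Environment()
env.process(line(env, stats))
env.run(until=SHIFT_LENGTH)

print("units produced in one shift:", stats["units"])
print("average rate [units/hour]:", round(stats["units"] / 8, 1))
```

Changing the changeover frequency or duration and re-running the shift is exactly the kind of safe what-if testing described above.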

  • Training and Testing your Model | Funis Consulting

Training and Testing your Model (01 Oct 2025)

A data-driven model is a powerful tool for detecting patterns and making predictions of future trends and behaviours. That is why modelling and simulation are increasingly being used in multiple industries to anticipate future outcomes and changes, and therefore to plan what the next move should be. In industries where you need high-accuracy insights, or where multiple variables affect a product, building a model is essential.

Modelling and simulation are powerful tools that can transform raw data into actionable insight. They enable more rapid and accurate diagnosis, reveal behavioural trends, and even support reliable predictions of future outcomes. These techniques are applied across industries wherever understanding and anticipating change creates value. So, what does it mean to build a model? A data-driven model is about teaching a system to recognise patterns from historic data so that it can predict future trends and behaviours.

To train a model, the data is typically split into two sets with a ratio of 70:30. The 70% is the training data, which is used to teach the model the relationship between inputs and outputs. For example, in medical imaging, a model could be trained so that if a particular feature or colour is detected in an MRI scan, then a corresponding diagnosis is made. Before training the model, the data needs to undergo verification and preparation, which includes cleansing to remove errors or inconsistencies, because, remember: garbage in, garbage out. It could also entail normalisation (e.g. converting to the same metric) to be able to compare like with like. You will also need to ensure the data is in the right format, as otherwise it could produce misleading results. These steps are essential so that the model is trained, or "taught", properly, meaning that it relies on information which is correct. Even the most advanced models can produce misleading results if trained on flawed information.

Once the training stage is complete, there is the testing stage, where you test the model on the remaining 30% of your dataset. The purpose is to evaluate how well the model performs on data it has not seen before (the 30% which was not used for training). When a model is tested, its predicted outputs are compared against the actual values, and a good model should give predictions that are very close to the actual values. Popular measures of performance are R² and RMSE. R², or the coefficient of determination, ranges between 0 and 1, and values close to 1 indicate that predictions align with actual outcomes. Another measure of performance is RMSE (Root Mean Square Error). This measures the average error between the predicted and actual values, with values closer to 0 indicating higher accuracy.

With categorical variables, predictions are inherently limited to the categories present in the training dataset, so you cannot reliably predict outcomes outside the observed categories or design space. In other words, the model can only work within the categories it has already seen.
On the other hand, with continuous variables, if you assign numerical values to represent a range, for example mapping different colours to a numeric scale (as in the MRI example above), the model can predict values outside the original training range. This allows extrapolation beyond the observed data, though the accuracy of such predictions should be treated cautiously because extrapolation is risky.

With model training you are essentially identifying patterns and trends in data. Patterns make sense if you understand why they occur, and most of the time this requires domain knowledge and insight. Once you have identified historical behaviour, you can extend it into the future with predictive techniques ranging from statistical tools to advanced machine learning models, the latter being especially useful for detecting subtle, non-linear relationships in large datasets. Having said this, predicting future outcomes is always risky, and any prediction should be accompanied by a confidence margin. So, a model which predicts trends, results and behaviours is there to help others make informed, smarter decisions. The value lies not only in recognising the trend but also in knowing how to respond and what your next steps should be.
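Here is a minimal sketch of the 70:30 workflow described above, using scikit-learn on a synthetic dataset; the data and the choice of a simple linear model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic data: three inputs and one output with some noise.
rng = np.random.default_rng(7)
X = rng.uniform(0, 10, (500, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 1, 500)

# 70% of the data teaches the model the input-output relationship.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# The held-out 30% checks performance on data the model has never seen.
y_pred = model.predict(X_test)
r2 = r2_score(y_test, y_pred)                          # close to 1 is good
rmse = np.sqrt(mean_squared_error(y_test, y_pred))     # close to 0 is good
print(f"R^2 on test set:  {r2:.3f}")
print(f"RMSE on test set: {rmse:.3f}")
```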

  • Modelling Chemical Complexity: Coupling Reaction Kinetics with Computational Fluid Dynamics (Part 2 of 2) | Funis Consulting

Modelling Chemical Complexity: Coupling Reaction Kinetics with Computational Fluid Dynamics (Part 2 of 2) (30 Jul 2025)

What really happens when a reaction unfolds in a real system? Not just chemistry, but fluid flow, heat transfer and complex geometries all influence the outcome. Whether it's a stirred tank, a baking oven or even a human body, chemical reactions don't occur in isolation. That's where Computational Fluid Dynamics (CFD) makes a difference, especially when paired with Reaction Kinetics. By combining the two, we can simulate not just what reactions happen, but how, where and under which conditions, in both time and space. From predicting mixing behaviour and temperature gradients to understanding reaction hotspots and residence times, CFD + Reaction Kinetics creates a powerful modelling framework. This isn't just theoretical; it's being used to optimise processes in food science, pharma, chemical engineering, energy and biosystems.

When reactions occur in real-world systems, whether in a stirred tank, a baking oven, mixed ingredients on a production line or even inside a human body, those reactions rarely take place in isolation. When a reaction takes place there is more going on, such as heat transfer, mass transfer and fluid motion, all playing vital roles in how that reaction unfolds, and vice versa. That is where Computational Fluid Dynamics (CFD) comes into play. CFD is a numerical method that simulates how fluids (liquids and gases) move and interact with their environment. And when CFD is combined with reaction kinetics, it enables us to model not just where and how fast a reaction occurs, but also how it evolves in space and time under dynamic physical conditions.

CFD and Reaction Kinetics work together in several stages. Firstly, with flow modelling, CFD solves the Navier-Stokes equations to predict the velocity, pressure and turbulence of a fluid in a system, helping define how the reactants are transported or mixed throughout the domain. Through heat and mass transfer, CFD tracks temperature distributions (especially important for temperature-sensitive reactions) and how substances diffuse or convect within the fluid. You then integrate Reaction Kinetics into the CFD model by embedding rate equations for the chemical reactions; these are evaluated at every time step and every point in space based on local concentrations and temperatures. Once the model is built, you can carry out a coupled simulation: as the system evolves, CFD continuously updates how flow affects reaction rates and vice versa, since chemical reactions can release or absorb heat, or alter viscosity and density, feeding back into the flow field. The output is a model providing rich spatial and temporal data on the concentrations of reactants and products, temperature profiles, reaction rates at different locations, as well as fluid velocities and mixing behaviour.

CFD coupled with Reaction Kinetics can be applied across many fields, such as food and sensory science, pharmaceuticals, chemical engineering, combustion and energy, as well as environmental and biological systems. In essence, it captures the complexity and heterogeneity of real systems, making it possible to optimise reactor design and process conditions, improve energy efficiency, avoid unwanted by-products and hotspots, control the quality and consistency of outputs, and reduce experimental costs through simulation.
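As a toy illustration of this coupling, the sketch below solves a one-dimensional advection-diffusion-reaction problem: species A is carried along a tube, diffuses, and converts to B with first-order kinetics, with the reaction source term evaluated at every grid point and time step. It is a finite-difference stand-in for a full CFD plus kinetics model, and all parameters are assumptions.

```python
import numpy as np

# Toy 1D coupled transport-reaction model: A -> B at rate k*C_A in a tube with flow.
L, n = 1.0, 101                  # tube length (m), number of grid points
dx = L / (n - 1)
u, D, k = 0.01, 1e-4, 0.05       # velocity (m/s), diffusivity (m^2/s), rate constant (1/s)
dt = 0.2 * min(dx / u, dx**2 / (2 * D))   # conservative stable time step

C_A = np.zeros(n)                # concentration of A along the tube
C_B = np.zeros(n)

def transport(C):
    """Upwind advection plus central-difference diffusion for interior nodes."""
    adv = -u * (C[1:-1] - C[:-2]) / dx
    dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    return adv + dif

for _ in range(30000):                                  # march towards steady state
    C_A[0], C_B[0] = 1.0, 0.0                           # inlet: pure A
    rate = k * C_A[1:-1]                                # local reaction source term
    C_A[1:-1] += dt * (transport(C_A) - rate)
    C_B[1:-1] += dt * (transport(C_B) + rate)
    C_A[-1], C_B[-1] = C_A[-2], C_B[-2]                 # simple outflow boundary

print("conversion at outlet:", round(C_B[-2] / (C_A[-2] + C_B[-2]), 3))
```

A real CFD solver does the same bookkeeping in two or three dimensions, with turbulence, energy balances and geometry included.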
CFD coupled with Reaction Kinetics therefore bridges the gap between theory and practice, giving scientists and engineers a virtual lab in which to explore, design and improve systems with precision.
