Elements of a new business and technical architecture for SCM software have been emerging over the course of the past five years. This emerging architecture, shown in Figure 4 and summarized in Figure 5, is based on business and technical concepts that are enumerated and described below. The architecture and its various elements offer great promise in addressing the issues previously discussed.
- Convergence: The emerging business and technical architecture for SCM solutions is based on convergence of business processes and time. What does this mean? To draw an analogy, when Steve Jobs introduced the first iPhone in 2007, he started by saying he was introducing three devices: 1) a music player; 2) an Internet-connection device; and 3) a phone. (He could have added a fourth device—the camera.) And, he said, the three are incorporated into a single device based on a single architecture. This is known as convergence, and it immediately disrupted the individual markets for music players, Internet-connection devices, phones, and cameras. Likewise, the business architecture of tomorrow will see increasing process convergence and collapsing of time boundaries between planning and execution.
These concepts are not far-fetched; leading companies such as Procter & Gamble are already collapsing their demand, supply, sales and operations planning (S&OP), and channel management processes into a single process executed by a single team and supported by a single technology. This was first reported by the website Logistics Viewpoints in 2015. The days of having to buy separate software solutions for demand, supply, S&OP, and channel management are numbered.
- Digital twin: Supply chain management software operates by first creating a computer data model of the real world. Logic and algorithms are then run against the model to arrive at answers and decisions. These answers are then operationalized into the real world. The quality of answers or decisions generated by the software depends heavily on the quality of the data model, or how well the model represents reality at the point in time at which the answers are generated. This is true across the decision-making landscape of SCM—manufacturing, distribution, transportation, and warehousing.
In today’s digital world, it has become common to refer to these data models as “digital twins.” In other words, the data model needs to be an identical twin of the real world at all points in time. This can only happen when the model is very robust—that is, it is flexible enough to represent all real-world entities and scenarios—and it can be brought up-to-date instantaneously. This second point is a core tenet of the digital enterprise and a key promise of the Internet of Things. Previously, there was a lag, or latency, between what was going on in the real world and what was represented by the model, such that suboptimal answers were often generated by the software. Because supply chain resources—things and people—can now transmit their status instantaneously, computer models will increasingly be synchronized with the real world, thereby enabling the digital twin.
In-memory computing (IMC) is one of the core enabling technologies behind the digital twin. IMC keeps data models in main memory rather than on disk, providing the speed necessary for enabling the digital twin. While IMC has been used in supply chain software for a couple of decades now, recent advances allow it to be scaled to handle much larger problems, including those that require the processing of a large number of digital signals from the Internet of Things.
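To make the idea concrete, here is a minimal Python sketch of a digital twin held entirely in memory; the class, field, and device names are illustrative, not any vendor's API. Each arriving status signal immediately overwrites the modeled state, so decision logic always queries the current picture of the real world rather than a stale snapshot:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """In-memory record for one real-world resource (machine, truck, bin)."""
    resource_id: str
    status: str = "unknown"
    capacity_used: float = 0.0

class DigitalTwin:
    """Holds the model entirely in memory, so updates and queries are fast."""
    def __init__(self):
        self._resources = {}  # maps resource_id -> Resource

    def apply_signal(self, resource_id: str, status: str, capacity_used: float) -> None:
        # Each incoming signal immediately overwrites the modeled state,
        # keeping the twin synchronized with the real world.
        self._resources[resource_id] = Resource(resource_id, status, capacity_used)

    def available(self) -> list[str]:
        # Decision logic runs against current state, not a stale snapshot.
        return [r.resource_id for r in self._resources.values()
                if r.status == "running" and r.capacity_used < 0.9]

twin = DigitalTwin()
twin.apply_signal("press-1", "running", 0.55)
twin.apply_signal("press-2", "down", 0.0)
print(twin.available())  # only press-1 is usable
```

With IMC, both `apply_signal` and `available` are in-memory operations, which is what makes continuous synchronization with high-volume IoT signals feasible.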
- Extensible data model: Future SCM software will have general-purpose data models with extensibility across functional domains, meaning the data model can represent manufacturing, distribution, and warehousing, for example. At the dawn of packaged SCM software in the 1990s, pioneers set this as a key objective. For a number of technical and business reasons, this objective was not achieved. Instead, SCM software evolved toward built-for-purpose, proprietary data models. For each new problem space in SCM, a new data model and new set of software was developed. This contributed to the “bingo board” problem described in Figure 2.
Supply chain software requires robust data models that can precisely represent myriad relationships and use cases across diverse environments. Precision has become more important as supply chains increasingly have to deliver products when, where, and how consumers desire them. The combination of the digital twin and an extensible data model provides the means to deliver much more precision when synchronizing operations across, for example, retail, distribution, and manufacturing.
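One way to picture an extensible data model is a small set of general-purpose constructs, such as a generic entity plus a typed relationship, that can represent any functional domain. The sketch below is purely illustrative (the entity types, attributes, and link names are invented), but it shows how one schema can span manufacturing, distribution, and retail:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Entity:
    """A general-purpose node: works for a plant, a DC, a store, or an SKU."""
    entity_id: str
    entity_type: str
    attributes: dict[str, Any] = field(default_factory=dict)  # domain-specific extensions

@dataclass
class Relationship:
    """A typed link, e.g. 'supplies' between a plant and a distribution center."""
    source: str
    target: str
    rel_type: str

# One schema spans manufacturing, distribution, and retail:
plant = Entity("plant-7", "plant", {"shift_pattern": "3x8"})
dc = Entity("dc-2", "distribution_center", {"dock_doors": 24})
store = Entity("store-114", "store", {"format": "urban"})
links = [Relationship("plant-7", "dc-2", "supplies"),
         Relationship("dc-2", "store-114", "replenishes")]

# The same traversal logic serves any pair of domains:
downstream = [l.target for l in links if l.source == "dc-2"]
print(downstream)
```

Because domain-specific detail lives in the extensible `attributes` map rather than in a hard-coded schema, new use cases can be represented without building a new data model from scratch.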
- Artificial intelligence: Future SCM software will be characterized by its ability to “self-learn.” This means it will adapt and make decisions by itself, without human intervention. This will be made possible by artificial intelligence (AI) and, more specifically, machine learning, which is software that can learn from data versus being completely driven by rules configured by humans. AI allows software to take on more of the decision-making load associated with managing supply chains. This is most prominent in fields such as robotics and self-driving trucks, where actions are self-directed based on the software’s ability to learn. In SCM software, AI will start by providing suggestions to humans, and then eventually be used to automate decisions.
The state of the art of learning in SCM software today is manual trial-and-error. When new situations arise that the software currently does not cover, the software is either reconfigured or the code itself is modified to accommodate the new situation. Either way is expensive and time consuming. Those responses are also ineffective given that supply chains and supply chain problems are highly dynamic and changing all the time. Machine learning will be critical to providing increasingly sophisticated response algorithms as part of the control engineering loop shown in Figure 6. For example, as part of today’s S&OP process, software provides decision options that can be applied when demand and supply do not match. These options—change a price, run overtime, or increase supply, to name a few examples—are rigidly defined. By contrast, machine learning promises to learn and come up with new response options, including sophisticated multivariate options.
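As a simple illustration of “learning from data versus rules,” the sketch below fits a least-squares line to past discount-versus-demand-lift observations (the numbers are invented) and inverts it to propose a response option. No response rule is configured by hand; the suggested discount comes entirely from the historical data:

```python
# Past observations: price discount (%) applied vs. demand lift (%) observed.
# These numbers are illustrative, not real data.
history = [(0, 0.0), (5, 4.1), (10, 8.3), (15, 11.9), (20, 16.2)]

def fit_line(points):
    """Ordinary least squares for one feature; learned, not hand-configured."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    return slope, mean_y - slope * mean_x

slope, intercept = fit_line(history)

def suggest_discount(required_lift):
    """Invert the learned relationship to propose a response option."""
    return (required_lift - intercept) / slope

# Demand is 12% below plan; the model proposes a discount learned from data:
print(round(suggest_discount(12.0), 1))
```

Real machine-learning responses would be multivariate and far more sophisticated, but the contrast holds: when new data arrives, the model is refit rather than the software reconfigured.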
- Streaming architecture: Streaming architecture has emerged in the past five years to help solve problems requiring real-time processing of large amounts of data. This architecture will be increasingly important to SCM software, as solutions need to enable the digital twin in order to support supply chains’ precise synchronization across time and function. There are two major areas in streaming architecture: streaming and stream processing. Streaming is the ability to reliably send large numbers of messages (for example, digital signals from IoT devices), while stream processing is the ability to accept the data, apply logic to it, and derive insights from it. The digital twin discussed earlier is maintained by the stream-processing part of the streaming architecture.
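The two halves can be sketched in a few lines: a source that yields messages (standing in for a broker feed) and a processor that applies logic to each message as it arrives. Device names, readings, and the threshold are all illustrative:

```python
def signal_stream():
    """Streaming half: stands in for messages arriving over a broker
    (e.g. temperature readings from IoT sensors on a refrigerated truck)."""
    readings = [("truck-9", 61), ("truck-9", 63), ("truck-9", 88), ("truck-9", 62)]
    yield from readings

def process(stream, limit=80):
    """Stream-processing half: accept each message, apply logic, derive insights."""
    alerts = []
    for device, temp in stream:
        if temp > limit:  # turn a raw signal into an actionable insight
            alerts.append(f"{device}: temperature {temp} exceeds {limit}")
    return alerts

print(process(signal_stream()))
```

In a production system the source would be a durable message broker and the processor a distributed stream-processing engine, but the division of labor is the same.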
- Control engineering logic: Control engineering is an engineering discipline that processes data about an environment and then applies algorithms to drive the behavior of the environment to a desired state. For example, manufacturing continuously processes data about the state of machines, inbound materials, and progress against the order backlog, and then runs algorithms that direct the release of materials to achieve production goals. This is known as a control loop. The key concepts are shown in Figure 6, with a “control” loop executing continuously, and a “learning” loop adjusting the control algorithms based on results from each pass of the control loop.
This concept is embedded in just about all functions related to supply chain management. S&OP, for example, has an objective function (a financial goal), and resources and people must be mobilized to achieve it. When there is a deviation (in control engineering this is known as the “error”) between the objective function and what is actually happening, corrective action must be taken. This corrective action could be something like reducing prices, increasing inventory, or working overtime. In the future, these corrective actions will be increasingly aided by artificial intelligence. Whether it’s in supply, manufacturing, distribution, warehousing, or order management, much of the work involved in SCM is focused on reducing the “error” between the objective and what is actually happening in the real world.
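A toy proportional control loop makes the mechanism concrete: each pass measures the error between the objective and the actual state, then applies a corrective action sized to that error, shrinking it toward zero. The scenario and numbers below are illustrative:

```python
def control_loop(state, target, gain=0.5, steps=6):
    """Each pass measures the error (target - actual) and applies a
    corrective action proportional to it, driving the error toward zero."""
    history = []
    for _ in range(steps):
        error = target - state  # deviation from the objective
        action = gain * error   # corrective action (e.g. extra replenishment)
        state += action         # the environment responds to the action
        history.append(round(state, 2))
    return history

# Inventory starts at 40 units against a 100-unit objective:
print(control_loop(40.0, 100.0))
```

The learning loop of Figure 6 would sit outside this function, adjusting parameters such as `gain` based on how well each pass actually reduced the error.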
- Workflow and analytics: Workflow defines the steps a worker carries out to accomplish a particular activity, task, or unit of work. In regard to software, this often means the sequence of screens, clicks, and other interactions a user executes. In most cases today, these steps are rigid and predefined; changing them requires configuration or even software code changes, which could take weeks or months.
Having a common, flexible workflow engine across SCM functional domains is critical to achieving convergence in supply chain software. This provides the ability to support many different use cases and interactions with the software, not just within functional domains but also across domains.
Integrated into each workflow are both logic and analytics. These analytics help predict things like demand, the impact of a promotion, or the precise arrival of an inbound ship, to name just a few possibilities. Analytics are now headed in the direction of prescribing answers to problems. For example, predicting demand is important, but it’s equally important to know what to do when the prediction does not match the plan. This is where prescriptive analytics can help—providing insights into what to do when reality does not match the operational plan. Artificial intelligence can also enable prescriptive analytics by continuously learning from past decisions.
Analytics used to be an offline, after-the-fact activity to determine what had happened. In other words, it had its own workflow that was separate from operational workflows. While this is helpful for looking in the rearview mirror, it is limited in its ability to help with what is currently happening. Analytics that are built in-line to operational workflows provide a dynamic, up-to-the-minute view, versus the offline model, which might provide a week-old or even a month-old view.
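A minimal sketch of in-line analytics, assuming a naive moving-average forecast and simple threshold-based prescriptions (both purely illustrative), shows prediction and prescription embedded in a single workflow step rather than run offline after the fact:

```python
def predict_demand(recent):
    """Predictive step: a naive moving-average forecast (illustrative only)."""
    return sum(recent) / len(recent)

def prescribe(plan, forecast):
    """Prescriptive step: turn the plan-vs-forecast gap into a recommended
    action, in-line, while the planner is still in the operational workflow."""
    gap = forecast - plan
    if gap > 10:
        return "increase supply"
    if gap < -10:
        return "run a promotion"
    return "no action"

forecast = predict_demand([118, 122, 126])
print(forecast, prescribe(plan=140, forecast=forecast))
```

Because both steps run inside the operational workflow, the recommendation reflects this minute's data instead of last week's report.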
- Edge computing: Cloud computing, a centralized form of computing often accessed over the Internet, is now being augmented with localized computing, also known as “edge computing.” (The term refers to being “out on the edge” of the cloud, close to where “smart” machinery is located.) Edge computing has rapidly evolved to address issues associated with processing data from the billions of microprocessor-equipped devices that are being connected to the Internet of Things.
The growth of edge computing is necessary for a number of reasons:
- Network latency is a real concern. Latency refers to the turnaround time for sending a message and receiving a response. The turnaround time for sending and receiving information to and from the cloud may be 100-200 milliseconds. With localized, or edge, computing, the turnaround time may be 2-5 milliseconds. In real-time production or warehousing environments, such low latency is a critical requirement. Furthermore, the variability in response times in cloud computing is much higher than with localized computing.
- Machines, inventory, and connected “things” generate millions of digital signals per minute. Sending all of these to the cloud is impractical. Thus, edge computers play a critical role in determining what needs to be sent to the cloud, and what can be filtered or thrown out. For example, a machine might report on its capacity every second. If the reported information has not changed, or changed only within a small band, there may be no need to send it along to the cloud.
- Some machines, inventory, and things operate in environments with no Internet connection or with Internet connections that are unstable. For example, ships at sea may not have Internet connections until they are close to port, while warehouses in emerging markets may have unreliable network connections. In situations like these, an edge computer can be used to process the signals locally; these signals are then forwarded to the cloud when a connection is available.
SCM technical architectures will increasingly be a mix of edge and cloud computing. For example, in Figure 4, the area labeled “Edge Computing” will be local to where the devices are located, and the rest of the diagram will be run in the cloud.
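The filtering role described above can be sketched as a simple deadband filter running on the edge computer: a reading is forwarded to the cloud only when it moves outside a small band around the last value sent. The band width and readings below are illustrative:

```python
class EdgeFilter:
    """Runs on the edge computer: forwards a reading to the cloud only when
    it moves outside a small band around the last forwarded value."""
    def __init__(self, band=2.0):
        self.band = band
        self.last_sent = None

    def should_forward(self, value):
        if self.last_sent is None or abs(value - self.last_sent) > self.band:
            self.last_sent = value
            return True
        return False  # filtered out locally; never leaves the site

edge = EdgeFilter(band=2.0)
readings = [70.0, 70.5, 71.0, 74.2, 74.3, 69.9]
forwarded = [r for r in readings if edge.should_forward(r)]
print(forwarded)  # most per-second readings never reach the cloud
```

The same component can also buffer readings while a connection is down and flush them to the cloud when connectivity returns, covering the ships-at-sea case described above.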
- Domain-specific “apps”: While a certain set of capabilities can be abstracted into a domain-independent infrastructure, there is still a need for unique use cases in different domains such as manufacturing, transportation, and warehousing. Often, though, companies encounter new use cases that cannot be handled by existing software. Rather than engage in costly and time-consuming configuration or reprogramming, they are increasingly taking a cue from the consumer world, where an entire “app economy” has been built on common smartphone infrastructures. This type of thinking has started to find its way into enterprise software, so that leading end-user companies as well as packaged-software companies are now migrating their infrastructures to this type of structure.
For example, many companies want to “own” the data associated with their customers because the decisions they make based on that data are increasingly the battleground of competitive differentiation. These companies are creating environments where internal staff and external software providers can develop “value-add” apps that are useful in mining and making decisions against customer data. An example is an inventory-replenishment app that looks at customer data for a given category and augments inventory-deployment decisions with an algorithm that provides new insights based on a unique combination of weather, local events, holidays, and chatter in social media.
- Business strategy orchestration: One of the persistent challenges in supply chain management is how to achieve continual synchronization between function-based operational areas and the overall business goals of the enterprise. This includes synchronization both within and across functional areas. For example, at the enterprise level, the goal could be high growth and low margin, low growth and high margin, or all points in between. Furthermore, these goals may differ by business unit, product line, and even by product and customer. These goals must be translated into operational policies, which then must be configured into SCM software.
As time goes on, SCM software will increasingly have an orchestration layer that creates and maintains alignment of the policies that govern each functional area. This will happen through two key constructs: the strategy dashboard, which maintains business goals and translates them to operational policies; and the control tower, which provides cross-functional visibility for the entire supply chain as well as control mechanisms to steer supply chain decisions. Over the past five years, control towers have captured the imagination of SCM professionals and C-level executives, with many companies attempting to create control room-type environments, even in boardrooms.
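The strategy dashboard's translation step can be sketched as a mapping from an enterprise-level goal to the operational policies for each functional area. The goal names, policy areas, and settings below are hypothetical, chosen only to show how one goal fans out into aligned policies:

```python
# Hypothetical mapping from an enterprise-level goal to operational policies.
STRATEGY_POLICIES = {
    "high_growth_low_margin": {
        "inventory": {"service_level": 0.98},  # carry more stock to win sales
        "transport": {"mode": "express"},
        "pricing": {"discount_ceiling": 0.20},
    },
    "low_growth_high_margin": {
        "inventory": {"service_level": 0.92},
        "transport": {"mode": "economy"},
        "pricing": {"discount_ceiling": 0.05},
    },
}

def orchestrate(goal, business_unit):
    """Strategy-dashboard role: push one consistent policy set to every
    functional area so they stay aligned with the enterprise goal."""
    policies = STRATEGY_POLICIES[goal]
    return {f"{business_unit}/{area}": settings
            for area, settings in policies.items()}

print(orchestrate("high_growth_low_margin", "consumer"))
```

Because goals can differ by business unit, product line, or even customer, the same translation would run per segment, with the control tower monitoring whether execution is actually tracking the resulting policies.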