Artificial intelligence (AI) refers to machines learning and mirroring human intelligence to perform complex tasks, such as logical reasoning, automating repetitive tasks, and making decisions on our behalf.
Over the years, AI has taken a bottom-up approach through its various subfields—machine learning (ML), deep learning, computer vision, and their blend, foundation models—to become what it is now. To unpack this, it is imperative to understand how artificial intelligence has helped businesses operate in sync with how humans interact with computers.
A Brief History of AI
Going back in time to the 1980s, developers approached AI explicitly through rules, expert systems, and deterministic programming. These rules encode knowledge that is fed into the system, helping businesses perform operations such as matching invoices, bookkeeping, and generating sales/purchase orders. That was a time when computer programs and deterministic systems ruled the business world. This form of AI was referred to as symbolic AI, and business owners were quite happy with the productivity and profit it delivered. Symbolic AI still exists today in the form of the semantic web, ontologies, and automated planning and scheduling systems.
Fast-forward to 2024: business owners are performing complex analytics in a snap through simple questions and answers, while reducing their need for developers or experts. Thanks to the rapid advancement of AI, the existing symbolic AI ecosystem is complemented by the statistical nature of ML algorithms and foundation models that make use of natural language processing (NLP). This new form of AI, coined "generative AI," provides business owners with up-to-date, real-time analytics and forecasts and automated business workflows, and its capabilities are expected to grow.
In SAP terms, seamlessly connecting generative AI to the existing functionalities of SAP S/4HANA adds an enterprise dimension to AI, helping data flow effortlessly between the multiple business processes of an enterprise.
Now let’s look at generative AI in more depth.
What is Generative AI?
As mentioned earlier, generative AI is an application of foundation models and large language models (LLMs), a form of foundation model, capable of generating output—be it text, image, audio, or video. One distinction, though, is that LLMs are specifically meant for text, while foundation models can operate across a wide variety of modalities, such as image, audio, and video.
Functions of Generative AI
Foundation models are ML models whose training signal comes from labels that are inherently present in the structure of the data itself, providing context to the system during the learning process. This differs from the three core learning approaches of ML—supervised, unsupervised, and reinforcement—where, in the supervised case especially, data labeling is done manually.
These are neural networks trained on large amounts of data using a self-supervised learning algorithm. Most of the tasks they perform are NLP-based, with a few exceptions (a minimal self-supervision sketch follows the list):
- Classifying data and extracting keywords
- Summarizing text
- Conversing and responding like a human in Q&A format after searching large volumes of data
- Generating content, such as images, video, and music
- Generating code
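To make the self-supervision idea concrete, here is a minimal sketch of how (input, label) training pairs can be derived from raw text with no manual annotation. The whitespace tokenizer and BERT-style masking rate are simplifying assumptions, not how production models tokenize.

```python
import random

def make_training_pairs(text, mask_token="[MASK]", mask_prob=0.15, seed=42):
    """Derive (input, label) pairs from raw text by masking tokens.

    The 'labels' are the original tokens themselves, which is why
    self-supervised learning needs no manual annotation.
    """
    random.seed(seed)
    tokens = text.split()  # toy whitespace tokenizer
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)       # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)      # nothing to predict here
    return masked, labels

masked, labels = make_training_pairs("invoices are matched against purchase orders")
print(masked)   # some tokens replaced by [MASK]
print(labels)   # the originals the model must recover
```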
Foundation models are trained on a huge amount of data in the form of tokens. The scale of a foundation model is often described by the number of parameters it contains—usually in the billions. Mainstream AI tools often have more: OpenAI's GPT-4 is reported to have about 1.76 trillion parameters.
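Since training data, context limits, and usage costs are all measured in tokens, it helps to see what a token actually is. Here is a minimal sketch using the open-source tiktoken tokenizer (the cl100k_base vocabulary is an assumption for illustration; any BPE tokenizer behaves similarly):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a BPE vocabulary used by several OpenAI models
text = "Seamlessly connect generative AI to SAP S/4HANA."
token_ids = enc.encode(text)

print(token_ids)              # integer IDs the model actually consumes
print(len(token_ids))         # billing and context limits are measured in these
print(enc.decode(token_ids))  # round-trips back to the original text
```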
A Dimensional Shift in the AI Space
Though generative AI offers a multitude of opportunities for business growth, business owners have been uncertain of its capabilities because of its probabilistic nature. The landmark paper "Attention Is All You Need" caught the attention of companies and business owners, as it introduced the attention-based transformer model that could potentially minimize the probabilistic nature of AI.
Generative AI and foundation models use the attention-based transformer architecture, in which a neural network component assigns weights to different parts of the input data (business data) to generate the desired output. One resulting model is OpenAI's Generative Pre-trained Transformer (GPT) architecture: the first GPT, with a mere ~100 million parameters, was capable of performing complex computations and sentiment classification with very little input and guidance. Today, GPT-4 has been scaled to the point where it reportedly uses more than 1 trillion parameters to generate output that is far more reliable and accurate.
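At the heart of the transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d)V. Here is a minimal single-head NumPy sketch; real transformers add multiple heads, learned projections, and masking:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                        # weighted blend of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dimension 8
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one context-aware vector per token
```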
This new dimensional shift manifests in the following transitions:
- From supervised learning to self-supervised learning: The self-supervised learning mode is defined by the foundation model's ability to scale and label itself continuously with the data it is fed.
- From pre-defined capabilities to emerging capabilities: Developers used to struggle to build applications that were future-resilient. With generative AI offering developers a platform for future-ready applications, the mindset is swiftly shifting from sticking to pre-defined capabilities toward embedding emerging capabilities into existing applications. This gives applications the much-needed option to scale.
- From single-purpose to multi-purpose AI models: ML kickstarted the automation process but could produce only single-purpose models. With generative AI on the rise, the expanding nature of foundation models allows developers and AI engineers to build multi-purpose AI models that seamlessly perform multiple tasks simultaneously.
- From classification AI to generative AI: The transition from the deterministic to the probabilistic nature of AI happened in parallel with the shift from models that classify data to models that generate multi-modal content. To put it concisely, ML engineers' decades of energy and effort have finally paid off.
Generative AI, a Yet-to-Be-Proven Technology
At a very high level, generative AI seems to be the rescuer for problems of any complexity—especially broad general-knowledge problems. However, the following factors impact its credibility and keep it at a nascent stage.
The first is a phenomenon called hallucination, in which generative AI produces plausible-sounding yet false answers. Next, incorporating generative AI into a business context requires constant fine-tuning of the use case models: businesses evolve rapidly, and adapting generative AI models to this quick change needs careful scrutiny.
Finally, the prompt functionalities of generative AI must be carefully assessed and supplemented with additional problem-solving techniques, such as function calling and chain-of-thought prompting, to improve the model's performance. This problem is attributed to the stateless nature of LLMs: each request stands alone, with no memory of prior turns.
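As a minimal sketch of that statelessness, the application below must resend the whole conversation on every turn; call_llm is a hypothetical stand-in for any chat-completion API:

```python
def call_llm(messages):
    """Hypothetical stand-in for any chat-completion API call."""
    return f"(model reply to {len(messages)} messages)"

# The model keeps no memory between calls, so the client must
# accumulate the history and resend it with every request.
history = [{"role": "system", "content": "You are a helpful business assistant."}]

for user_turn in ["Summarize Q3 revenue.", "Now compare it with Q2."]:
    history.append({"role": "user", "content": user_turn})
    reply = call_llm(history)            # the *entire* history goes over the wire
    history.append({"role": "assistant", "content": reply})

print(len(history))  # 5: system prompt + 2 user turns + 2 assistant turns
```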
Approaches Making Generative AI Reliable
Although generative AI is one of the newest entrants into the AI space, many companies are already embracing it to address their enterprises' business needs. With generative AI gaining traction, companies are now investing in tools and solutions that help enterprises stay current with the evolving nature of AI.
Grounding is one such process: it adds information and context that were not part of the foundation model's initial training data. This helps to build and adapt a foundation model with relevant data, mitigating the above-mentioned limitations of foundation models.
The following grounding and adapting approaches help companies like SAP incorporate new proprietary or open-source LLMs/foundation models into their existing business ecosystems.
Passing more task-specific instruction to the model through the prompt improves the relevance of the foundation model's output; this is referred to as prompt engineering. There are two common ways to use prompt engineering—zero-shot learning and few-shot learning. Both take a sequential approach, learning in context from the additional instructions the user provides. The difference is that zero-shot learning relies on the LLM's own reasoning patterns without any worked examples, whereas few-shot learning relies on concrete examples supplied by the user. However, each additional token in the context window costs extra and increases response time.
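Here is a minimal sketch contrasting the two prompt styles; the sentiment-classification task and email texts are invented for illustration:

```python
# Zero-shot: the model relies on its own reasoning patterns; no examples given.
zero_shot = (
    "Classify the sentiment of this customer email as positive, negative, or neutral.\n\n"
    'Email: "The delivery was late again and nobody answered my calls."\n'
    "Sentiment:"
)

# Few-shot: concrete examples steer the model toward the expected behavior and format.
few_shot = (
    "Classify the sentiment of each customer email.\n\n"
    'Email: "Great service, the issue was fixed within an hour." -> positive\n'
    'Email: "The invoice amount looks fine to me." -> neutral\n'
    'Email: "The delivery was late again and nobody answered my calls." ->'
)

# Trade-off from the text: every extra example adds tokens,
# which adds cost and latency to each request.
print(len(zero_shot.split()), len(few_shot.split()))
```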
Retrieval augmented generation (RAG) is a technique that allows LLMs to access external information by injecting domain knowledge through embeddings (numerical representations of data that retain semantic or contextual meaning), knowledge graphs, and search results. One standout feature of RAG is that it supports the credibility of the output by attaching references/sources to it.
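Here is a minimal sketch of the retrieve-then-generate flow. The hash-based embed function is a toy placeholder for a real embedding model, and the documents are invented:

```python
import hashlib
import numpy as np

def embed(text, dim=64):
    # Toy placeholder for a real embedding model: deterministic,
    # but with none of the semantic quality of learned embeddings.
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Return policy: goods can be returned within 30 days.",
    "Shipping: standard delivery takes 3-5 business days.",
    "Warranty: hardware is covered for 24 months.",
]
doc_vectors = np.stack([embed(d) for d in documents])

question = "How long do customers have to return an item?"
scores = doc_vectors @ embed(question)   # cosine similarity (vectors are unit length)
best = documents[int(scores.argmax())]   # retrieve the most relevant snippet

# The retrieved snippet is injected into the prompt and can be cited as a source.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)
```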
Orchestration tools give LLMs access to external tools, such as APIs and software libraries, which provide a definitive reasoning structure for the model. Beyond that, orchestration helps LLMs perform calculations and use app functions, plug-ins, prompts, and model chaining (a data science technique that arranges multiple ML models in sequential order to deliver the desired output).
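Here is a minimal sketch of model chaining, where each step consumes the previous step's output; summarize and classify are hypothetical stand-ins for real model calls:

```python
def summarize(text):
    """Hypothetical stand-in for a summarization model call."""
    return text.split(".")[0] + "."

def classify(summary):
    """Hypothetical stand-in for a classification model call."""
    return "complaint" if "late" in summary.lower() else "other"

def chain(text, steps):
    # Model chaining: each step consumes the previous step's output.
    result = text
    for step in steps:
        result = step(result)
    return result

email = "The shipment arrived late. Please advise on compensation options."
print(chain(email, [summarize, classify]))  # -> "complaint"
```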
Fine-tuning takes a pre-trained ML model and trains it further on smaller, labeled datasets by adjusting the model's weights. This process improves model performance on domain-specific tasks, using supervised task-specific tuning and instruction tuning.
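Here is a minimal PyTorch sketch of the mechanics: a small labeled dataset nudges the model's weights. The tiny linear classifier and synthetic data are stand-ins for a real pre-trained model and domain corpus:

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained model: in practice you would load real
# pre-trained weights instead of a randomly initialized layer.
model = nn.Linear(8, 2)              # 8 input features -> 2 classes
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Small, labeled, domain-specific dataset (synthetic here).
X = torch.randn(32, 8)
y = torch.randint(0, 2, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)      # supervised, task-specific objective
    loss.backward()                  # gradients w.r.t. the pre-trained weights
    optimizer.step()                 # weights shift toward the new domain
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```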
Reinforcement learning from human feedback (RLHF) involves human reviewers who rate prompts and responses so the model can be adjusted to respond appropriately. Two key functions of this technique are that it updates the model over time by incorporating human preference, and that it adjusts weights to improve quality through timely manual intervention.
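One building block of RLHF is a reward model trained on human preference pairs. Here is a minimal PyTorch sketch of the pairwise preference loss −log σ(r_chosen − r_rejected); the linear reward model and random embeddings are stand-ins for illustration:

```python
import torch
import torch.nn as nn

reward_model = nn.Linear(16, 1)   # stand-in: maps a response embedding to a scalar reward

# Human annotators picked one response over the other (synthetic embeddings here).
chosen = torch.randn(4, 16)       # embeddings of preferred responses
rejected = torch.randn(4, 16)     # embeddings of rejected responses

r_chosen = reward_model(chosen).squeeze(-1)
r_rejected = reward_model(rejected).squeeze(-1)

# Pairwise preference loss: push the reward of the chosen response
# above the reward of the rejected one.
loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()                   # this gradient is how human preference adjusts weights
print(loss.item())
```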
How Does SAP Make Generative AI Enterprise Ready?
The above-mentioned approaches make generative AI accessible, but not always reliable. Enterprise-ready generative AI applications will never become a reality if checkpoints are not set around the perimeter. SAP has taken an organic approach to building responsible AI that stands the test of time, establishing processes and governance around these models to make them reliable.
With AI ethics placed at the core, checkpoints in this underlying layer ensure the following in SAP’s AI tools:
- From the design phase onward, humans are kept in the loop to work alongside the AI, ensuring that the deployed models supporting their business use cases are production-worthy.
- Developers can validate outputs and perform cross-checks.
- A group of testers tries to break the model before it goes into production to check whether it deviates from its intended purpose. If it deviates, the team flags the model and recommends fine-tuning. This practice is referred to as red teaming.
- Businesses are extremely dynamic, and so is consumer/customer behavior. Continuous feedback and monitoring allow deployed models to adapt to changing patterns of customer behavior. These model modifications happen in two-, three-, or six-month cycles to keep the output on track.
SAP Business Technology Platform
SAP Business Technology Platform (SAP BTP) offers a comprehensive, multitenant SaaS solution for potential SAP customers. There are many use cases where SAP BTP finds application—as an example, consider generating insights from emails instantly. Here, the user's intention is to enhance customer support through automation and advanced email insights. Incoming emails are categorized into different buckets as SAP BTP performs sentiment analysis or agent assessment through LLMs and LangChain (a framework for developing applications powered by LLMs).
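Here is a minimal sketch of the classification step using an OpenAI-compatible chat API. The openai client and the model name are assumptions for illustration; in an SAP landscape the call would typically be routed through SAP BTP's AI services instead:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

email = "Your support team resolved my billing issue within a day. Thank you!"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in whatever your landscape provides
    messages=[
        {"role": "system",
         "content": "Classify the customer email as positive, negative, or neutral, "
                    "and name the bucket it belongs to (billing, delivery, product, other)."},
        {"role": "user", "content": email},
    ],
)
print(response.choices[0].message.content)  # e.g. "positive / billing"
```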
SAP Build Apps
Released in 2022, SAP Build Apps is the core underlying layer sitting on SAP BTP that embeds AI into SAP's own products. Intended for AI model developers, it is designed with security and governance in mind. Built from the ground up to enterprise-grade standards, it ships with the readily available checkpoints that developers and users expect from SAP offerings.
SAP Build Apps gives developers access to all foundation models, be they hosted, remote, or fine-tuned.
SAP HANA Cloud
Grounding capabilities are a necessity for models to function at their best. Using the available business data and context, SAP HANA Cloud helps developers ground models in enterprise data through its vector engine and data management capabilities. This, in turn, helps developers anchor a model's functionality in the right business context.
Adding to this, developers now have access to SAP HANA vector store capabilities.
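Conceptually, a vector store keeps embeddings next to business records and answers nearest-neighbor queries. Here is a minimal NumPy sketch of that idea (not the actual SAP HANA Cloud API; the records and 3-d vectors are invented):

```python
import numpy as np

# A vector store keeps embeddings alongside the business records they describe.
records = ["Open invoice #4711 for ACME Corp",
           "Delivery delayed for order #0815",
           "Contract renewal due for Globex"]
vectors = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.9, 0.1],
                    [0.0, 0.2, 0.9]])  # toy 3-d embeddings; real ones have hundreds of dims

def nearest(query_vec, top_k=1):
    # Cosine similarity: normalize both sides, then take dot products.
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    scores = v @ q
    return [records[i] for i in np.argsort(scores)[::-1][:top_k]]

print(nearest(np.array([0.2, 0.8, 0.1])))  # -> ['Delivery delayed for order #0815']
```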
Generative AI Hub
This tool provides instant access to partner-provided foundation models, such as those available through Microsoft Azure OpenAI and Falcon 40B. SAP also plans to include other foundation models in the near future, such as Aleph Alpha and Meta Llama 2. Developers can now try multiple LLMs to find the right ones to power their mission-critical processes with complete control and transparency.
The following three pillars of Generative AI Hub illustrate the overarching role of generative AI in a business setting:
- A purpose-built, readily available toolset repository for developing models of any scale
- Speedy outcomes through access to top-rated foundation models from different providers
- Complete trust and control while supercharging mission-critical business processes
Joule
Joule is SAP's copilot, designed to understand and interpret enterprise business data for end users. Embedding Joule across SAP's portfolio of solutions, such as cloud ERP, human capital management, and spend management, brings other AI capabilities into those solutions. This includes, but is not limited to, automation, a natural user experience, and additional insights, optimizations, and predictions.
Driving Business Value Using AI in SAP
As a relevant, reliable, and responsible enterprise application provider, SAP applies AI to its business solutions and processes with the following results. AI and ML algorithms help enterprises shift their focus toward business innovation while enabling higher levels of automation for repetitive tasks. An enhanced, natural user experience blurs the distinction between human and machine interaction. Lastly, augmenting human cognition and decision-making with AI yields faster insights, better-optimized systems, and highly accurate predictions—most beneficial when such models are deployed to resolve supply chain issues.
Conclusion
Owing to the emerging capabilities of LLMs and their future successors, businesses must be more than prepared, equipping themselves with the right technologies to carry them into the future. SAP has prepared itself, and its users, for this future by adding generative AI tools to its solutions.