By Nadja De Maeseneer & Gunjan Shukla

Decoding Generative AI: Beyond the AI Hype Cycle

Earlier this month Gartner published the 2023 Hype Cycle for Artificial Intelligence, singling out Generative AI as a game-changer 'with an impact like no other technology in the past decade.' With the emergence of powerhouse applications like ChatGPT and DALL-E, the technological landscape has been profoundly reshaped across industries.


At Predii, we're observing the impact Generative AI is having - already today - as part of strategic discussions and mid- to long-term roadmaps.


Since the launch of ChatGPT late last year, Generative AI has been on a journey to transform not only our daily lives but industry applications as well. The automotive industry, specifically the service and repair business, is extremely complex. Our team at Predii has leveraged AI-powered solutions for years to navigate this complexity - for instance by building guided repair products on historical repair data, or by predicting vehicle breakdowns using telematics. Generative AI introduces a whole new dimension to these solutions: first, the ability to smartly retrieve knowledge across data silos, and second, the natural, intuitive summarization of key insights in human language.


Starting with the basics: what is Generative AI?

Generative AI falls under the umbrella of Artificial Intelligence models and techniques that extract signals or meaningful patterns from data in order to produce new, previously unseen data rooted in the original dataset. This original dataset consists of the examples the model learns from, or labeled data that serves as training and validation material. Generative AI introduces variance and functions by grasping complex patterns, structures, and relationships within the data.


This ability to generate data is achieved through various techniques, such as probabilistic modeling, neural networks, and deep learning.


There are several types of Generative AI models, such as Generative Adversarial Networks (GANs), auto-regressive models, and transformer-based models. Most recent applications build on transformer models such as the GPT series or Llama 2.


How is Generative AI different from other disruptive technologies?

Broadly, Generative AI stands out as a distinct technology through its ability to create new content autonomously. Within the broader domain of Artificial Intelligence, Generative AI is oriented towards creation; its focus is on data generation. It often complements other AI technologies by providing new data or content for downstream tasks. It is proficient at creating synthetic data instances - images, text, music, or other forms of content - based on patterns extracted from a training dataset.


Generative AI models are especially compelling due to the billions of parameters that are the secret behind their generation abilities.


To quantify these models, consider the most widely known products out there: GPT-3.5 and GPT-4 are estimated to have ~175 billion and ~1.7 trillion parameters respectively, with a parameter being a numerical value associated with each connection between artificial neurons in a neural network.


These parameters are crucial to the network's ability to learn and make predictions or classifications - loosely speaking, they reflect the amount of knowledge these models have ingested.
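To make "parameters" concrete, here is a quick back-of-the-envelope count for a small fully connected network. The layer sizes are invented for illustration, and real transformer parameter counts also include embedding and attention matrices, but the bookkeeping idea is the same:

```python
def dense_layer_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output neuron."""
    return n_in * n_out + n_out

def network_params(layer_sizes):
    """Total parameters for a chain of fully connected layers."""
    return sum(dense_layer_params(a, b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))

# A tiny 784 -> 256 -> 10 network (an MNIST-style classifier):
total = network_params([784, 256, 10])
print(total)  # 203,530 parameters -- GPT-3.5's ~175 billion is roughly 860,000x larger
```

Even this toy network has over 200,000 connections to tune, which gives a sense of the scale gap between classic models and today's LLMs.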


Applying Generative AI in the Automotive Repair Ecosystem and Beyond

As of September 2023, Generative AI has already found successful applications across various domains, thanks to its ability to create synthetic data and content that resembles real-world examples. Predii hosted a thought leadership event on AI in the Automotive Aftermarket this past July, where leading AI experts emphasized the significant impact of emerging Generative AI solutions and clearly stated that

Generative AI will augment human intelligence.

Specifically in the automotive industry, we see Generative AI technology take hold in augmenting the most crucial elements of the repair and service ecosystem. Generative AI can retrieve, correlate, and present insights that help in understanding customer concerns, diagnosing vehicle failures, identifying correct, best-fit parts, and performing repair procedures efficiently. In supply chain applications, AI can reduce production delays and costs and optimize inventory levels.

Generative AI can act as a co-pilot for augmenting workflows and optimizing business processes.

The key here is: make the job of our key players - technicians, service advisors, parts counter people - easier.


As is often the case in the AI business, specialization is key. We've said it many times: the automotive repair and service ecosystem is complex. Precision, reliability, and expert knowledge are crucial to building useful, revenue-generating solutions.


Tilak Kasturi and Mark Seng recently led an AAPEX webinar, emphasizing the demand for highly domain-specific models that can analyze historical data effectively, retrieve information accurately, and reach conclusions reliably.


Challenges in Generative AI

While Generative AI has the potential to augment or even replace a large number of creative tasks, these models come with their own set of unique challenges. The key ones are:


  • Reliability and accuracy of output

  • Secure servers to experiment on proprietary data

  • Access to high-speed GPU compute

  • High-quality annotation and data generation by annotators and domain experts; labeling enough data for fine-tuning large language models


To stay on that last aspect: fine-tuning a language model refers to the process of taking a pre-trained language model and further training it on a specific, narrower dataset to make it more specialized and adapted to a particular task or domain, e.g. the automotive repair business.


The pre-trained model, often referred to as the "base model," is typically a large model like GPT-3 that has been trained on a vast corpus of text data to acquire a general understanding of natural language.
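As a loose illustration of the idea - not how production LLM fine-tuning is actually done, which involves billions of parameters and dedicated frameworks, and with all numbers here invented - consider a one-parameter "model" pre-trained to w = 2.0 on general data, then further trained on a small domain-specific dataset where the true relationship is y = 3x:

```python
# "Pre-trained" weight from a general dataset (assume it learned y = 2x).
w = 2.0

# Small domain-specific fine-tuning set (true relationship here: y = 3x).
domain_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

def mse(weight, data):
    """Mean squared error of the one-parameter model on the dataset."""
    return sum((weight * x - y) ** 2 for x, y in data) / len(data)

loss_before = mse(w, domain_data)

# A few epochs of plain stochastic gradient descent on the squared error.
lr = 0.01
for _ in range(50):
    for x, y in domain_data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad

loss_after = mse(w, domain_data)
print(round(w, 3))  # close to 3.0: the model has adapted to the narrower domain
```

The pre-trained starting point is what makes fine-tuning cheap: the model begins near a useful solution and only needs small adjustments on the specialized data.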


Large Language Models can be thought of as giant word-predicting machines, trained on vast amounts of text from books and other sources, and capable of common-sense, natural-sounding answers.

Large Language Models are essentially the engine behind Generative AI. They are typically based on deep learning algorithms, such as recurrent neural networks (RNNs) or transformers, that can process sequential data and generate output based on the input data. These models can produce text that is often difficult to distinguish from text written by humans, and they can be used for a variety of applications, including language translation, summarization, question-answering, and even creative writing.
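A minimal caricature of "word prediction" is a bigram model: count which word follows which in a corpus, then predict the most frequent successor. The corpus below is invented, and real LLMs use transformers over billions of documents rather than raw counts, but the predict-the-next-token framing is the same:

```python
from collections import Counter, defaultdict

corpus = ("the engine misfires the engine stalls "
          "the car starts").split()

# Count, for every word, which words follow it and how often.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "engine" (seen twice, vs. "car" once)
```

An LLM does the same job with a learned probability distribution over its whole vocabulary at every step, conditioned on the entire preceding context rather than a single word.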


Hallucinations

Reliability and accuracy of output are critical aspects of Generative AI. The more domain-specific the use case, the higher the expectations typically are for models to deliver precise, accurate answers.


Widely known as ‘hallucinations’, these failures occur when models generate outputs that are not grounded in reality or supported by the data they were trained on. The result is often plausible-sounding information that is entirely fictional or inaccurate.


Some reasons for such hallucinations are:

  1. Overfitting: Models that have a large number of parameters are susceptible to overfitting. Overfitting happens when a model learns to memorize the training data rather than generalize from it. This can lead to the model producing outputs specific to the training data but not reflecting the underlying patterns in the data.

  2. Complexity: Complex neural network architectures are prone to hallucinations. These advanced models pick up intricate details from the training data, and that very aptness can make them produce unrealistic outputs: instead of interpreting the input, the model leans on what it memorized during training, so unfamiliar inputs yield answers that lack context.

  3. Foreign data as input: When there is a significant difference between the training data and the data encountered during deployment, the model produces hallucinatory outputs because it cannot adapt to the unfamiliar distribution.

When AI models lack contextual understanding, they generate outputs that may make sense in isolation but become illusory in a broader context.


There are other reasons, too, such as over-expression of uncertainty, lack of access to the most up-to-date information, lack of common sense, and bias.
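The memorization failure behind overfitting can be caricatured with a "model" that is nothing but a lookup table - a deliberately extreme toy, with invented labels: it is perfect on its training data and useless on anything unseen, because no general pattern was ever learned.

```python
# Toy training data: diagnostic codes mapped to labels (values invented).
train = {"p0301": "cylinder 1 misfire",
         "p0420": "catalyst efficiency low"}

def memorizing_model(code):
    """Zero training error, zero generalization: a pure lookup table."""
    return train.get(code, "???")

print(memorizing_model("p0301"))  # correct, straight from training data
print(memorizing_model("p0302"))  # "???" -- an unseen code exposes the memorization
```

A well-generalized model would instead capture the underlying structure (here, that pXX0N codes describe cylinder N misfires) and handle unseen inputs gracefully.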


Strategies to mitigate hallucinations


Mitigating these issues is crucial.


Strategies like fine-tuning on specific datasets, fact-checking responses ("human-in-the-loop"), implementing content filters, and context-aware Retrieval Augmented Generation (RAG) workflows can help improve response accuracy and reduce the likelihood of generating problematic content.
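The RAG idea can be sketched in a few lines: retrieve the documents most relevant to the question, then ground the model's prompt in them so answers come from real data rather than the model's imagination. This toy retriever ranks by word overlap - real systems use embedding similarity - and the documents and prompt template are invented:

```python
import re

documents = [
    "P0300 indicates random misfires; inspect ignition coils and spark plugs.",
    "Low tire pressure triggers the TPMS warning light.",
    "A failing alternator causes dim lights and battery drain.",
]

def tokenize(text):
    """Lowercased word set; a toy stand-in for real embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs, k=1):
    """Rank documents by word overlap with the question, return the top k."""
    q = tokenize(question)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(question, docs):
    """Ground the model's answer in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("why does the engine have random misfires", documents)
```

Because the generated answer is constrained to the retrieved context, a well-built RAG pipeline can also cite its sources, which matters in a domain like repair guidance.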

Additionally, ongoing research and development in the field aim to address these challenges and make these models more reliable and safer for various applications.


Overall, Generative AI is at the forefront of technology, representing the state of the art in artificial intelligence with its remarkable ability to reshape the automotive landscape. It has the power to transform various functions by tapping into its ability to generate content and provide solutions across the automotive sphere, be it supply chain, quality control, marketing and sales, predictive analytics, predictive maintenance, autonomous vehicles, or design and prototyping.


For successful implementation, it will be crucial to plot a route that eliminates the hallucinations and hiccups that act as barriers to accuracy, and to address the challenges of bias, data privacy, and security. By embracing this technology wisely, the automotive industry can steer itself into a future of limitless possibilities and unparalleled progress.
