We are excited to share a summary of our recent thought leadership event "Generative AI Powering Automotive Service Innovation". The event, co-hosted by Cooley LLP, brought together ~40 industry leaders in Palo Alto, CA to discuss the impact, opportunities, and challenges of Generative AI in Automotive Service.
Our panels featured speakers from both AI and the Automotive Service Ecosystem: Adam Cheyer, Adam Galper, Babak Hodjat, Sonal Gupta, Rajeev Chand, Rex Green, Tim Eisenmann, Behzad Rassuli, and Tony Rimas.
Here are some of the key takeaways from the event combined with Predii’s key insights on Generative AI:
Generative AI is groundbreaking, more than it is hype: leading AI researchers are surprised by what it can do, e.g. generating code (see "Introducing GitHub Copilot X", GitHub) and multi-modal generative capabilities (see LLaVA, llava-vl.github.io).
Generative AI is already useful: it can augment tasks and be a true value add. There is speculation that many new roles will be created: AI will not replace humans in the workforce, but humans who know how to use AI probably will replace those who don't. AI will augment human intelligence.
Fine-tuning works well when you know your downstream task well and have the requisite gold-standard (task-specific) data to fine-tune the Large Language Model (LLM). Fine-tuning gives these massive models direction on what to do with their knowledge base. Proper context retrieval (e.g. Retrieval-Augmented Generation, or RAG, architectures) is required for LLMs to remain factual and avoid hallucinations (a minimal sketch of this pattern follows this list).
LLMs appear to answer queries requiring causal knowledge (through memory). Ongoing discussions and deeper investigations are underway.
Personalized Data will need to be carefully assessed under the guidelines of regional data privacy standards.
“Creative” controls (like temperature and prompting) are fascinating. To achieve good generative behavior, some creativity might be desirable! (See the toy temperature example after this list.)
Llama 2 is a great move from Meta: it just upped the game for (actually) open (source) AI and encourages further exploration of LLM applications.
AI-powered applications are most successful when they interconnect ‘knowing’ (factual, domain-specific knowledge) and ‘doing’ (supporting and augmenting workstreams).
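To make the RAG point above concrete, here is a minimal sketch of the pattern (our own illustration, not something presented at the event): retrieve the most relevant service documents first, then assemble a prompt that instructs the model to answer only from that context. The toy keyword retriever and the commented-out `llm.generate` call are hypothetical placeholders for whatever embedding store and LLM API you actually use.

```python
# Minimal RAG sketch: ground the LLM's answer in retrieved service documents
# instead of relying on the model's parametric memory alone.
# The retriever below is a toy keyword-overlap scorer; in practice you would
# use an embedding model and a vector store.

from typing import List

# Hypothetical in-memory "knowledge base" of domain documents.
DOCUMENTS = [
    "P0420: Catalyst system efficiency below threshold (Bank 1). Common fixes: "
    "inspect exhaust leaks, test O2 sensors, replace catalytic converter.",
    "Front camera calibration is required after windshield replacement on "
    "vehicles equipped with lane-keep assist.",
    "Brake fluid should be flushed every 2 years or per the OEM maintenance schedule.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Return the k documents sharing the most words with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: List[str]) -> str:
    """Assemble a grounded prompt: the model is told to answer only from context."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What should I check for a P0420 code?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)
    # The prompt would then be sent to your LLM of choice, e.g.:
    # answer = llm.generate(prompt, temperature=0.2)  # hypothetical client call
```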
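And as a rough illustration of why temperature acts as a "creativity" dial (again, our own toy example with made-up logits, not from the event): temperature rescales the model's token logits before sampling, so low values concentrate probability on the most likely token while high values flatten the distribution and produce more varied output.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; temperature < 1 sharpens, > 1 flattens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for three candidate completions.
tokens = ["replace", "inspect", "ignore"]
logits = [2.0, 1.0, -1.0]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"temperature={t}: " + ", ".join(f"{tok}={p:.2f}" for tok, p in zip(tokens, probs)))

# Sampling at higher temperature yields more varied ("creative") output;
# low temperature keeps answers close to the most likely completion.
choice = random.choices(tokens, weights=softmax_with_temperature(logits, 0.8))[0]
print("sampled token:", choice)
```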
As with previous innovation cycles, downstream applications and economic impact will evolve over time. However, the pace of change is going to be much faster (2-5 years) than in the previous Internet/eCommerce innovation cycle (10-30 years). The Automotive Service Industry has a unique opportunity, with some immediate applications where Generative AI can instantly add value:
Generative AI can augment repetitive tasks in consumer interaction, e.g. a Virtual Service Advisor or powering predictive service applications, as well as CRM and marketing applications where virtual assistants can provide intelligent campaign management.
Collision industry applications where specialized knowledge related to a specific vehicle body needs to be extracted carefully from OEM documentation.
Repair industry applications with proprietary knowledge of technical procedures and calibration requirements.
General purpose LLMs need to be carefully assessed and fine-tuned to be applicable for industry applications.
Pre-training is resource-intensive in terms of both cost and carbon footprint.
Hallucinations are a real problem in LLMs and need to be addressed, especially for quality- and safety-relevant applications.
The question is: how can we lead the way in applying AI to solve these problems?
Here are some additional references:
Understanding Generative Artificial Intelligence and Its Relationship to Copyright (house.gov)
AI Should Augment Human Intelligence, Not Replace It (hbr.org)
ChatGPT Creator Sam Altman Says, 'Jobs Are Definitely Going to Go Away' (businessinsider.com)
Joseph Schumpeter – Theory of Innovation Cycles, 1942
Lastly, consider this a casual teaser: we are already in the early stages of planning a follow-up event. We will keep you informed as the details unfold, and we sincerely hope you can join us.
Curious about the event or Predii's work in Generative AI? Let's talk!