Business Impact of Large Language Models - Workshop Recap
Recordings from our recent workshop on LLMs
In our recent workshop, we covered a range of interesting topics on how large language models (LLMs) and business constraints shape each other.
We started the day by hearing from Patricia Thaine (CEO at Private AI):
Privacy is a crucial consideration for ChatGPT and LLMs in general, and there are several reasons why it should be a top priority in these technologies. Her talk covered:
The importance of privacy in these systems
The legal obligations and market concerns related to privacy
Incidents highlighting privacy concerns
Measures companies take to protect user data
Challenges in corporate environments, and
The risks of re-identification when combining quasi-identifiers.
Suhas Pai (CTO at Bedrock AI) then shared his experience and insights on the business impact of LLMs and the challenges of taking prototypes to production. He examined the trade-offs of text summarization and the challenge of "getting the right information" in the finance industry, and emphasized the need to carefully consider what LLMs can actually do and how they can add value to a company's products and services.
We then heard startup pitches from Tavis and Chandan, along with feedback from Moien on how they can sharpen their pitches when talking to investors.
Tavis pitched his company, Kadoa.
Kadoa automates data extraction with AI. Data extraction is a $6B market on the cusp of explosion as a result of LLMs. Kadoa currently focuses on web HTML data extraction with plans to expand to PDFs, emails, and more.
Chandan pitched his company, Twig.
Twig empowers developer-focused companies to deliver world-class customer support with four times fewer staff, using AI agents and contextual domain data. The founders are experienced AI and CX professionals from H2O.ai, Workday, and Intel. In just four months, Twig has already reached $50k in ARR.
Several folks from our network presented interesting business use cases!
Richie Youm, a data scientist at Wealthsimple, shared his experience building a topic model called Ernie. The goal of Ernie was to provide a smoother experience for clients and improve service-level agreements. The team explored techniques like LDA and BERTopic but ran into inconsistent results and noisy data, so they rebuilt the taxonomy using GPT models and developed an efficient routing system for improved customer service. The presentation also touched on the potential for an automated customer service system and the importance of data privacy and security.
Sathish Gangichetty, a senior solutions architect at Databricks, discussed forecasting systems and how they can be disrupted using transformers. He emphasized the importance of forecasting accuracy for companies looking to improve performance and increase revenue, and explained how transformers can improve forecasting accuracy, focusing on the PatchTST model. He covered channel independence in time-series forecasting with transformers, the application of transformers to product forecasting, the potential of transfer learning for time series, and the importance of creating user-friendly interfaces with LLMs. He concluded by discussing how simple and accessible it is to surface results from transformer models and the need for trust in the user experience. The presentation also briefly touched on the challenges of stock prediction using reinforcement learning models.
Ayesha Hafeez, the director of ML Solutions and Architecture at Arctic AI, discussed the use of LLMs for security compliance assessment. She explained the problem of manual compliance assessment and the benefits of automating it with LLMs. Ayesha also provided an overview of the machine learning pipeline and the functional components involved in the solution.
Finally, yours truly spoke about SHERPA, our open source project, and its latest evolution.
SHERPA is a community project that aims to demonstrate the power of knowledge-ops by leveraging LLMs. It focuses on operationalizing knowledge, enabling machines to read and provide real-time information, with the goal of enhancing communication, collaboration, and project management through machine learning models with language and reasoning skills. The project involves a "book that writes itself" and a companion Slack app that you can use to talk to the book and other resources. I encouraged everyone to join and contribute to the project.
Hope this gives you some fun content for the weekend! :)
Hit reply if you have any thoughts about any of the above, or if you want to speak at our upcoming events.
Cheers
Amir…