AI-generated meeting summaries – smart assistants get smarter

Summarization is a growth area for LLMs as the rise of AI-generated meeting summaries and other enterprise briefing apps illustrates.
20 March 2023

Data room: AI-generated meeting summaries can gather insight from participants. Image credit: Shutterstock Generate.


Enterprise uses for artificial intelligence (AI) are on the increase. And adding to a fast-growing list of applications for large language models (LLMs) – such as the newly updated GPT-4, which is being rolled out across OpenAI’s suite of services – is meeting summarization. Machine learning tools have long been able to generate calendar entries based on key phrases in emails or instant messages. And with online meetings commonplace, connecting remote workers or catering for hybrid on-premises and at-home scenarios, there’s a wealth of productivity gains up for grabs, including AI-generated meeting summaries.

Online meeting platforms such as Zoom and Microsoft Teams have made great strides in speech-to-text conversion, and the accuracy of meeting transcriptions has never been higher. But as with any long document, when the time window is short, it’s helpful to have an executive summary that puts the essential details up front. AI-generated meeting summaries can distill long meeting discussions into key action points, and there are a couple of approaches that developers can deploy to achieve that.

Extractive versus abstractive summarization

Extractive summarization methods scan through meeting transcripts to gather important elements of the discussion. For example, insights could include meeting agenda items, attendee names, dates, sales figures, and other items of interest. The important point about extractive summarization is that it pulls together material that can all be found in the original source text – a meeting transcription, in this case. But, as mentioned, there are other tools in the meeting summarization box.
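To make the idea concrete, here is a minimal extractive sketch: it scores each sentence of a transcript by the frequency of the words it contains and returns the top-scoring sentences verbatim. The scoring scheme and stop-word list are illustrative assumptions, not any vendor’s implementation.

```python
# Minimal extractive summarization sketch (illustrative only): score each
# sentence by the frequency of its non-stopword words, keep the top few,
# and return them in their original order.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "for", "is", "we"}

def extractive_summary(transcript: str, max_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    words = [w for w in re.findall(r"[a-z']+", transcript.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    scored = []
    for idx, sent in enumerate(sentences):
        tokens = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOPWORDS]
        score = sum(freq[t] for t in tokens) / (len(tokens) or 1)
        scored.append((score, idx, sent))
    top = sorted(scored, reverse=True)[:max_sentences]
    # Re-order the chosen sentences so the summary follows the meeting's flow.
    return " ".join(sent for _, _, sent in sorted(top, key=lambda x: x[1]))

print(extractive_summary(
    "Sales figures for Q1 were up 12 percent. The team discussed the Q2 agenda. "
    "Anna will send the revised budget by Friday. Lunch options were debated at length."
))
```

Every word in the output already exists in the source transcript, which is the defining property of the extractive approach.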

Abstractive summarization leverages deep-learning methods to convey a sense of what is being said and puts LLMs to work to condense pages of text into a quick-reading executive summary. Some of the phrases may be similar to those in the transcription, but abstractive summarization is much more than just copying and pasting from the input data. “It’s written from scratch based on the data that’s been given to it,” Nigel Cannings, CTO of Intelligent Voice, told TechHQ. “And the style is influenced by what was in the original material.”
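For illustration, this is roughly what handing a transcript to an LLM for abstractive summarization can look like, using OpenAI’s chat completions endpoint (the pre-1.0 openai-python interface current at the time of writing). The prompt wording, model choice, and temperature are assumptions for the sketch, not any provider’s recipe.

```python
# Hedged sketch of abstractive summarization via OpenAI's chat completions API.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: in practice, load from an env var or secrets store

def abstractive_summary(transcript: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You summarize meeting transcripts into a short executive "
                        "summary followed by bullet-point action items with owners."},
            {"role": "user", "content": transcript},
        ],
        temperature=0.2,  # keep the generated summary close to the source material
    )
    return response["choices"][0]["message"]["content"]
```

Unlike the extractive sketch, the output here is newly generated text whose style merely reflects the original material.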

Online tools such as OpenAI’s Research Preview of ChatGPT make it straightforward for users to put LLMs to work to generate a synopsis of a longer document. But ChatGPT’s context length – the length of the prompt plus the maximum number of tokens in the completion – has, until recently, been relatively short. GPT-3.5, which formed the basis of ChatGPT at launch, had a context length of 4,000 tokens. And given that 100 tokens (the elements used to represent different parts of the user prompt and AI-generated output) equate to around 75 words, the LLM could potentially struggle with meeting transcriptions that run into many thousands of words.
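That limit is easy to sanity-check before sending a transcript off for summarization. The sketch below uses OpenAI’s tiktoken tokenizer to count tokens and compares the result against the 4,000-token figure quoted above; the reserve kept back for the completion is an assumed value.

```python
# Rough check of whether a transcript fits a model's context window,
# using OpenAI's tiktoken tokenizer to count tokens.
import tiktoken

def fits_in_context(transcript: str, context_limit: int = 4000,
                    reserved_for_summary: int = 500) -> bool:
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    n_tokens = len(enc.encode(transcript))
    # Roughly 100 tokens ~ 75 words, so 4,000 tokens is only about 3,000 words -
    # easily exceeded by an hour-long meeting transcript.
    return n_tokens <= context_limit - reserved_for_summary
```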

Structured data

Intelligent Voice has its own secret sauce for chopping up meetings into digestible elements. And unsurprisingly, Cannings wasn’t about to share any company IP with TechHQ. But he did point out that the firm can leverage its speech-to-text knowledge to focus on more intense debates taking place within a longer discussion. “Meetings typically have a structure, which can be stitched together,” he added.
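Intelligent Voice’s method remains proprietary, but a common generic workaround for transcripts that overflow the context window is map-reduce-style chunking: summarize the transcript in pieces, then summarize the combined summaries. A minimal sketch, reusing the hypothetical abstractive_summary helper from the earlier example, with an assumed chunk size:

```python
# Generic map-reduce summarization sketch (not Intelligent Voice's approach):
# split a long transcript into word-count chunks, summarize each chunk,
# then summarize the concatenated chunk summaries.
def summarize_long_transcript(transcript: str, chunk_words: int = 2000) -> str:
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partial = [abstractive_summary(chunk) for chunk in chunks]
    return abstractive_summary("\n\n".join(partial))
```

The trade-off is that detail can be lost at each pass, which is one reason longer context windows are attractive.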

Model updates will help too, when it comes to deriving AI-generated meeting summaries based on longer transcriptions. For example, OpenAI’s GPT-4 represents a notable upgrade on GPT-3.5. “Our standard GPT-4 model offers 8,000 tokens for the context,” writes OpenAI in notes to users on the differences between its GPT-4 models. “We also offer an extended 32,000 token context-length model, which we are rolling out separately to the 8k model.”

Figures released by meeting transcription firms highlight the scale of the potential market for AI-generated meeting summaries. In February 2023, Otter.ai announced that its tool had been used to transcribe over 1 billion meetings. And in the same announcement, the Mountain View, California-based technology firm signposted that it was launching software that goes beyond speech-to-text conversion. “OtterPilot uses AI to automate the entire meeting process with automated live notes (auto-join and auto-share), automated slide capture, and automated summaries,” writes the company on its website.

Systems integration

There are many options to choose from for users who are looking to streamline their post-meeting operations. Supernormal claims that its software – which is compatible with Zoom, MS Teams, and Google Meet, based on advertised product details – saves users 10 minutes every meeting thanks to auto-generated meeting notes. What’s more, notes can be generated in multiple languages. Other firms offering smart meeting notes and live transcription include Colibri and Avoma.

Colibri was founded by a team of machine learning, AI, and natural language processing (NLP) experts (from Google, Nuance, Cisco, and other centers of excellence, including university labs) with the common goal of making meetings more productive and enjoyable. Avoma, too, which is an acronym for ‘a very organized meeting assistant’, believes that AI-generated meeting summaries can help teams raise productivity and reach goals faster.

Without a doubt, there’s much to celebrate in making meetings less time-consuming and more efficient. But before putting systems to the test and feeding company plans and employee thoughts into the unknown, make sure to visit the AI summarization provider’s security page to understand how your data will be protected.