The State of Generative AI Part 2

Where we are and where we are going in audio and video

Tanay Jaipuria
Jan 24, 2023

This is a weekly newsletter about the business of the technology industry. To receive Tanay’s Newsletter in your inbox, subscribe here for free:


Hi friends,

Last week, I wrote about the rapid development of Generative AI and broke it down by the medium generated, covering the first three. Today, I’ll discuss the remaining three:

  • Text

  • Code

  • Image

  • Audio

  • Video

  • Multi-modal

I recommend reading that piece before this one for a discussion on text, code, and images.

[Image: “Futurepunk landscape” generated by Midjourney]

Audio

When it comes to using Generative AI for audio, the obvious thing to discuss is audio or voice synthesis.

First, let’s discuss where we are in terms of the state of audio models.

Broadly, I think of audio as music and non-music/voice.

  • On the voice side, Microsoft recently announced VALL-E, a model that can take just 3 seconds of someone’s voice and synthesize that voice for use in any application. It can then take any text prompt and output that text as audio in the person’s voice, with the ability to control emotion.

  • On the music side, several companies are working on models that generate music. Examples include Riffusion, which uses Stable Diffusion to create “images” of spectrograms that are then converted to audio (a minimal sketch of that conversion step follows below), and Dance Diffusion (by Harmonai), which can generate music and be fine-tuned on specific albums.

[Image: Illustration of Riffusion spectrograms (Source)]
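To make Riffusion’s approach more concrete, below is a minimal sketch of the spectrogram-image-to-audio step, assuming librosa’s Griffin-Lim implementation for phase reconstruction. The file name, dB scaling, and image dimensions are illustrative assumptions, not Riffusion’s actual pipeline.

```python
# Minimal sketch: convert a Riffusion-style spectrogram image back to audio.
# Assumes `pip install librosa pillow`. The dB scaling and dimensions below
# are illustrative; Riffusion's real pipeline differs in its exact encoding.
import numpy as np
import librosa
from PIL import Image

def spectrogram_image_to_audio(path, sr=22050, n_fft=2048, hop_length=512):
    # Load the image as grayscale; pixel intensity stands in for magnitude.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    # Row 0 of an image is the top; flip so low frequencies sit in row 0.
    mag_db = np.flipud(img) / 255.0 * 80.0 - 80.0  # map [0, 255] -> [-80, 0] dB
    magnitudes = librosa.db_to_amplitude(mag_db)
    # Griffin-Lim iteratively estimates the phase the image discarded.
    # The image height must equal n_fft // 2 + 1 (1025 here) for this to work.
    audio = librosa.griffinlim(magnitudes, hop_length=hop_length, n_fft=n_fft)
    return audio, sr

# audio, sr = spectrogram_image_to_audio("riffusion_output.png")
```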

Use Cases

So what are the use cases for audio synthesis?

I. Media and Advertising

Being able to generate specific voices represents a big unlock in media, where audio synthesis can be used in video games, movies, and television for things like dubbing into other languages in an actor’s own voice, creating new characters with specific voices in video games, or simply speeding up the filming and editing process in movies and TV.

[Video: Resemble brought back Andy Warhol’s voice for this documentary]

Examples of companies in this space include:

  • Resemble.ai offers voice cloning models and is used for audio generation across movie and TV use cases, among others. It has been used in Netflix documentaries.

  • Wellsaid Labs can create AI voices that can then be used for audio/video ads or media use cases.

  • Papercup focuses on generating synthetic voices to dub TV and movies into other languages.

II. Call Centers

Call centers rely on a set of high-quality, low-latency voices for real-time use cases. Historically, they had only a few voices to choose from, offered by big vendors like Google and Microsoft.

With audio generation, one can imagine voices and accents local to someone’s region being generated to respond to their customer support requests.

Rime is one of the companies working in this space, though some of the ones mentioned above also support real-time requirements needed for these use cases.

[Image: Robot call center, generated by Midjourney]

III. Narration and Accessibility

The synthetic voices generated by AI can also be used in narrations for use cases such as audiobooks, hardware devices such as smart speakers, text-to-speech accessibility options in web browsers, and other similar use cases including education.

For these, most of the companies that generate voices and have a text-to-speech API can be used as a solution.

IV. Music Generation

Generating music or assisting in the generation of music is another prevalent use case. Aside from the music models that can be used directly, companies are building applications in this space including:

  • Boomy allows users to generate music, upload it to music streaming platforms, and earn revenue. Boomy users have already generated >10M songs.

  • Soundraw allows users to choose the mood, genre, length, and other attributes and generates music for them.

  • Moises assists musicians in their creative process by leveraging AI to let them separate vocals, modify beats, change pitch, and more.

[Image: Moises is an AI-powered musician assistant]

V. Audio Transcription

Audio transcription isn’t technically “Generative AI,” but in many cases, to get value from audio it must first be converted to text, and modern AI speech models prove very useful at that.

  • Whisper by OpenAI is an open-source model that can be used for transcription and speech recognition (a minimal usage sketch follows after this list).

  • Deepgram is one of the leaders in leveraging AI for voice transcription, speech recognition, and understanding, both in real time and offline.
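As a concrete example, here is a minimal transcription sketch using OpenAI’s open-source whisper package; the file name is a placeholder.

```python
# Minimal sketch: transcribe an audio file locally with OpenAI's open-source
# Whisper model. Requires `pip install openai-whisper` and ffmpeg on the PATH.
import whisper

model = whisper.load_model("base")        # small checkpoint; larger ones are more accurate
result = model.transcribe("meeting.mp3")  # placeholder file name
print(result["text"])                     # the full transcript as plain text
```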


Video

Video is in some ways the ultimate boss of Generative AI, since doing it properly in its end state likely requires generating images, audio, and text to some extent.

Today, we don’t have any real models in the wild that can do general video generation à la images or text. But progress is being made, with a number of companies working on it:

  • Make-A-Video by Meta, while not publicly available, demonstrates the ability to create a simplistic video from a text prompt or an image, as below:

[Video: “A teddy bear painting a portrait”]
  • Imagen Video by Google is another model, which can create 24fps video from a text prompt; it is also not publicly available.

  • Runway, which builds an AI-based video editing product, has teased a text-to-video model, though it hasn’t been made available yet.

    Runway (@runwayml), Sep 9, 2022: “Make any idea real. Just write it. Text to video, coming soon to Runway. Sign up for early access: runwayml.com”
  • OpenAI has confirmed that it is working on a video generation model but doesn’t have a definitive timeline.

    “It will come. I wouldn’t want to make a confident prediction about when. We’ll try to do it, other people will try to do it ... It’s a legitimate research project. It could be pretty soon; it could take a while.” - Sam Altman, CEO of OpenAI

Use Cases

While fully general video generation may not be available yet, there are still several use cases that are already solvable today, and that set is only continuing to expand.

I. Sales, Training, and Support

One kind of video that can easily be generated with AI today is a human avatar delivering a specific script. A number of companies are working to make it simple to generate these avatar videos for use cases such as sales (personalized outreach at scale), training, customer support, and more.

Some examples of companies include:

  • Synthesia is an AI-based avatar video creation platform used across sales and training use cases.

  • Rephrase is another AI-based avatar video creation platform used for personalizing video messaging in marketing and sales-type use cases.

  • D-ID is another such platform, used more for training, learning, and corporate comms use cases.

II. Marketing

Increasingly, the default ad format on channels such as Facebook, Instagram, and TikTok is video. However, video ads are quite difficult and expensive to create today, especially for SMBs. As generative video evolves, companies will aim to take an image of a product or service directly from the business and autogenerate a 10-30 second video ad creative, likely with more than just a talking avatar.

While I’ve not come across any product that can do this fully yet (other than avatar-based ones), one can see how Meta’s Make-A-Video or Google’s Imagen Video might be leveraged to convert an image into a short video.


Similarly, companies like Omneky are working to generate ad assets (initially images) and to understand which factors cause certain assets to perform better than others, which can then be used to craft the perfect video assets.

III. Insights Extraction

A lot of knowledge and insight is contained in the video medium, including meetings and other conversations, which can be hard to parse, search, or scan relative to text. Fortunately, there are companies working directly on making use of that knowledge. While not quite “Generative Video,” these companies nonetheless leverage LLMs to summarize, synthesize, and extract insights from videos.

Examples include:

  • Gong pulls data from video calls (among other sources) and extracts insights to help sales reps perform better.

  • Fathom helps summarize video meetings and identify to-dos and action items.

[Image: How Gong works]

In general, we’re likely to see many more ways of making videos indexed, searchable and useful, as I tweeted about here:

Tanay Jaipuria (@tanayj), Dec 24, 2022: “Product request: A semantic search engine indexing the full content of all (popular) podcasts and TikTok/IG/Youtube creators' videos. With Whisper/Deepgram for transcriptions, OpenAI for embeddings, and Pinecone for semantic search, this seems to be more feasible than ever.”
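As a rough illustration of that pipeline, here is a toy sketch: Whisper for transcription, OpenAI embeddings, and a simple in-memory cosine search standing in for a vector database like Pinecone. The model name and file names are assumptions for illustration.

```python
# Toy sketch of the pipeline in the tweet: transcribe clips with Whisper,
# embed transcripts with OpenAI, then rank clips by cosine similarity.
# An in-memory numpy search stands in for a vector DB like Pinecone.
import numpy as np
import whisper
from openai import OpenAI

client = OpenAI()                 # assumes OPENAI_API_KEY is set
asr = whisper.load_model("base")  # local speech-to-text model

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# 1. Transcribe a few (placeholder) podcast clips and embed each transcript.
clips = ["episode1.mp3", "episode2.mp3"]
corpus = [(path, asr.transcribe(path)["text"]) for path in clips]
vectors = np.stack([embed(text) for _, text in corpus])

# 2. Semantic search: embed the query and return the closest clip.
def search(query: str) -> tuple[str, float]:
    q = embed(query)
    scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    best = int(np.argmax(scores))
    return corpus[best][0], float(scores[best])

# print(search("what did they say about generative AI?"))
```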

IV. Consumer Social

While we’re not quite there yet, it’s not hard to imagine a world where a lot of the videos we see on TikTok and Instagram Reels will be essentially generated by AI. In particular, videos that tend to have a specific formula and are more constrained in what they show (i.e., the avatar use cases above) will be easy to pick off first.

As one data point, the hashtag #deepfake has 1.3B views, with multiple videos going viral daily. Here is one example featuring Harry Styles.

So far, I’ve not come across a product that can easily generate whole videos beyond simple talking-avatar ones, but AI tools have made it significantly easier to create and edit videos across this and other use cases.

Tanay Jaipuria (@tanayj), Jul 24, 2022: “Bytedance is probably working on a DALL·E for video right now which can then generate the perfect videos for users on the fly as they scroll through their TikTok feed”

Multi-Modal

To close, I want to touch on multi-modality. In some sense, video is already multi-modal in that it requires stitching together audio, images, and (likely) text, but there are a few other things worth considering.

I. Multi-modal image/text/video

Today, most image models are unable to render text within the images they generate. But in many cases, such as design and marketing, one might want text overlaid on the image, which requires factoring in the image’s style and contents to know where to place it. A similar capability may be needed for videos.

This is an area where companies working on applications will likely have to plug the gaps for now with post-processing layers on top of their image/video models.
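As a tiny example of what such a post-processing layer could look like, here is a sketch that overlays caption text on a generated image using Pillow; the placement heuristic and file names are illustrative assumptions.

```python
# Minimal sketch of a text-overlay post-processing layer: place caption text
# on a generated image with Pillow. The bottom-center placement heuristic and
# file names are illustrative assumptions.
from PIL import Image, ImageDraw, ImageFont

def overlay_caption(image_path: str, text: str, out_path: str) -> None:
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # a real product would pick a brand font
    # Naive placement: centered horizontally, near the bottom edge.
    w, h = img.size
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    tw, th = right - left, bottom - top
    draw.text(((w - tw) / 2, h - th - 20), text, fill="white", font=font)
    img.save(out_path)

# overlay_caption("generated.png", "Summer Sale: 30% Off", "ad_creative.png")
```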

Similarly, another example is that of storytelling. Most stories and presentations contain a mix of text and images, in context.

One example of a company doing this is Tome, an AI-powered storytelling platform that can generate presentations of text and images from a single text prompt. I gave it a prompt to create a presentation about the State of Generative AI, and below is what it created:

Link to full tome - autogenerated with just a prompt.

Another interesting use case here is chat interfaces. While ChatGPT has shown a lot of people what might be possible with AI, today it responds with just text. A future advancement might be for it to respond with images or videos as well, depending on the need.

For example, Ex-human is a company building chatbots that can be multi-modal and respond with images and memes as needed.

II. Actions

Generating images, text, and other media is nice, but what would be even better is if AI could also interact with our programs based on what we tell it and take actions on our behalf.

Actions could be simple personal or life-admin tasks such as:

  • cancel my ticket

  • change my flight

  • remind me to do X

  • purchase X product

Now, some of these are already doable, with varying degrees of accuracy and success, with Siri/Alexa/others.

[Image: Iron Man’s JARVIS was one such AI assistant in popular culture]

But they could be more complicated, such as taking arbitrary actions on any interface, even interfaces that aren’t pre-built with integrations.

One might type in a prompt as text or say something as audio, and the output may be an action.
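One way to prototype this prompt-to-action idea is with an LLM’s function-calling interface, which maps a natural-language request onto a structured action that a program can then execute. The sketch below uses the OpenAI Python SDK; the action schema, model name, and booking details are illustrative assumptions, not any particular company’s implementation.

```python
# Toy sketch of prompt-to-action: an LLM's function-calling interface turns a
# natural-language request into a structured action a program can execute.
# The schema, model name, and booking reference are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

tools = [{
    "type": "function",
    "function": {
        "name": "change_flight",
        "description": "Rebook the user's flight to a new date",
        "parameters": {
            "type": "object",
            "properties": {
                "booking_ref": {"type": "string"},
                "new_date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["booking_ref", "new_date"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Change flight ABC123 to March 3rd, 2025"}],
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]  # assumes the model chose the tool
print(call.function.name, json.loads(call.function.arguments))
# e.g. -> change_flight {'booking_ref': 'ABC123', 'new_date': '2025-03-03'}
```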

This is another area that I think a lot of companies will work on, either directly by aiming for artificial general intelligence, or in constrained spaces such as more useful AI assistants (no offense to Siri).

Adept is one of the interesting companies in this space, working towards a foundation model for actions, and has built an “Action” Transformer.

This type of approach could essentially be a new form of interaction with interfaces or can be thought of as RPA on steroids.

In the example below, an instruction to add a new lead to Salesforce via a text input is completed by the AI.

Adept (@AdeptAILabs), Sep 14, 2022: “2/7 This can be especially powerful for manual tasks and complex tools — in this example, what might ordinarily take 10+ clicks in Salesforce can be now done with just a sentence.”

Thanks for reading! If you liked this post, give it a heart up above to help others find it or share it with your friends.


If you have any comments or thoughts, feel free to tweet at me.

If you’re not a subscriber, you can subscribe for free below. I write about things related to technology and business once a week on Mondays.

