Contrary Research Rundown #110
The long tail of AI and the power law of hype, plus new memos on Vannevar Labs and Medable.
Research Rundown
Artificial intelligence has received the lion’s share of venture investors’ attention and capital since OpenAI launched ChatGPT in November 2022. Consider the following data points:
75% of the startups in Y Combinator’s Summer 2024 cohort (156 out of 208) were working on AI-related products.
In Q2 2024, 49% of all venture capital went to artificial intelligence and machine learning startups, up from 29% in Q2 2022.
In 2020, the median pre-money valuation for early-stage AI, SaaS, and fintech companies was $25 million, $27 million, and $28 million, respectively. In 2024, those figures are $70 million, $46 million, and $50 million.
OpenAI, which was unprofitable and on pace to generate $4 billion in annual revenue, was able to raise new capital at a $157 billion valuation in October 2024 (implying a 39x revenue multiple).
However, even though the spotlight currently rests on model companies like OpenAI and Anthropic, and AI-native companies like search engine Perplexity and transcription tool Descript, the number of “non-AI” companies that will be impacted by artificial intelligence far exceeds the number of companies whose core business is AI-focused. We collectively refer to the impact of artificial intelligence on these other companies as “the long tail of AI.”
The ways that companies within this long tail have used AI are as diverse as the companies themselves. For example, did you know that Walmart has developed its own AI models to improve the customer shopping experience? Or that Boston Consulting Group gave all of its employees access to ChatGPT after seeing that the chatbot gave consultants a 40% performance boost on creative tasks?
To explore how non-AI companies are integrating AI, we published a deep dive this week on The Long Tail of AI. Because AI is developing and changing so quickly, we created a four-part framework that categorizes different AI integration strategies based on their resource intensity:
Building an independent, proprietary model: This is the most resource-intensive way to leverage artificial intelligence and is generally reserved for companies that have large, novel data sources from which they can derive unique insights, as well as the human and financial capital needed to train a new model from scratch.
Leveraging proprietary closed-source models: Building on closed-source models such as OpenAI’s GPT models or Anthropic’s Claude, which are easy to access via API, have been trained on vast amounts of data, and can generate accurate, detailed outputs across a variety of fields, from coding to customer service (a minimal sketch of this approach follows this list).
Leveraging open-source models: Models like Mistral’s or Meta’s Llama are also powerful tools; the largest Llama 3.1 model has 405 billion parameters. Unlike closed-source LLMs, however, open-source models provide companies with increased transparency and flexibility, as model weights can be adjusted to meet specific customer needs.
Using third-party AI tools: Tools such as ChatGPT are the easiest to integrate, as customers can simply pay to use a fully developed product instead of investing in building or adjusting models internally.
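To make the difference in effort between the second and third strategies concrete, here is a minimal, hypothetical sketch in Python. It assumes the OpenAI Python SDK for the closed-source path and the Hugging Face transformers library with a gated Llama 3.1 checkpoint for the open-source path; the model names, prompts, and order number are illustrative placeholders, not examples drawn from the deep dive.

```python
# Strategy 2 (closed-source via API): the provider hosts the model; the company
# only sends prompts and receives completions. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a retail customer-support assistant."},
        {"role": "user", "content": "What is the status of order #12345?"},
    ],
)
print(reply.choices[0].message.content)

# Strategy 3 (open-source weights run locally): the company downloads the weights,
# so they can later be fine-tuned on proprietary data. The model ID is an
# assumption and requires accepting Meta's license on Hugging Face.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
print(generator("Summarize our return policy in one sentence.", max_new_tokens=60))
```

The trade-off the deep dive describes shows up even in this toy example: the API call offloads all infrastructure to the vendor, while the local pipeline requires GPUs and model management but leaves the weights in the company’s hands.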
Today’s deep dive uses lessons and examples from 14 companies, from retail giant Walmart to privacy-focused browser startup Brave, to explore how different businesses are thinking about their AI integration strategies today. For more on this, read our full breakdown of how non-AI companies are using AI.
Vannevar Labs' flagship software, "Decrypt", is a foreign text workflow platform that helps intelligence officers find patterns and insights in vast amounts of battlefield information, translate foreign languages, and search for key documents. To learn more, read our full memo here and check out some open roles below:
Forward Deployed Engineer - Remote
Medable streamlines and automates manual tasks, provides analytical insights, and speeds up the clinical trial process for a faster time to market. To learn more, read our full memo here and check out some open roles below:
Senior Director (Business Development) - Remote (US)
VP (Sales) - Remote (US)
Check out some standout roles from this week.
Vertex | San Francisco, CA - Founding Engineer
Superhuman | Remote - Lead Sales Engineer, Revenue Operations Analyst, Senior Engineering Manager
Parafin | San Francisco, CA - Analytics Engineer, Product Manager (Capital Growth), Senior Software Engineer (Infrastructure)
Memora Health | Remote (US) - Clinical Program Manager, Senior Software Engineer, General Interest
Chronosphere | Remote (US) - Senior Sales Engineer, Engineering Manager (Logging)
AI is driving tech giants to make massive infrastructure investments, with Amazon’s recent quarterly capital expenditure reaching $22 billion — surpassing traditionally high-spending sectors like oil and gas.
Anthropic hired Kyle Fish, an AI welfare researcher, to explore the welfare of its AI systems. Fish’s role involves tackling deep questions about what capabilities might make AI systems deserving of moral consideration, how to identify such traits, and what actions companies could take to protect these interests if they exist.
Security researchers found that Hugging Face, an online repository for generative AI models, has been unknowingly hosting files with hidden code capable of compromising data. These files can potentially steal information, including tokens used to pay for AI and cloud services.
Bowery Farming, a vertical farming company that grew produce in controlled indoor environments using hydroponics, shut down after failing to secure more financing, despite having previously raised $700 million and reached a valuation of $2.3 billion.
Perplexity launched an Election Information Hub to help voters understand key issues, make more informed decisions, and more easily track election results by leveraging data from the Associated Press.
During Amazon’s Q3 2024 earnings call, Amazon CEO Andy Jassy hinted at an “agentic” version of Alexa, internally code-named “Remarkable Alexa,” that could take action on behalf of a user.
Leopold Aschenbrenner, a former OpenAI researcher, wrote a manifesto titled “Situational Awareness,” in which he predicts that AI will eventually become powerful enough to carry out AI research itself.
In March 2025, SpaceX may attempt to transfer propellant from one orbiting Starship to another, a milestone that would pave the way for an uncrewed landing demonstration of a Starship on the moon.
Perplexity CEO Aravind Srinivas offered the New York Times Perplexity’s AI services to fill in for tech workers who were on strike.
Electricity bills are rising for everyday consumers because AI data centers’ massive electricity demand is straining the power grid.
Elon Musk’s xAI is reportedly raising a new round of funding at a $40 billion valuation, which could be as much as four times what X, the social network, is worth.
Scale AI announced Defense Llama, an LLM built for American national security. Meta collaborated on the model, which is available for integration into US defense systems.
OpenAI hired Caitlin Kalinowski, former head of Meta’s Orion AR glasses initiative, to focus on its robotics work. Kalinowski’s announcement came the same day that OpenAI invested in Physical Intelligence, a robotics startup in San Francisco.
Anthropic released the Claude AI 3.5 Haiku model on Amazon Bedrock. Haiku is “the next generation of Anthropic’s fastest model, combining rapid response times with improved reasoning capabilities, making it ideal for tasks that require both speed and intelligence.”
Sohu, the first dedicated transformer application-specific integrated circuit (ASIC), was launched this week. Its maker claims it runs AI models 10x faster and more cheaply than GPUs, achieving throughput of over 500K tokens per second.
Anthropic has announced plans to grant US defense and intelligence agencies access to its Claude AI models. Through a partnership with AWS and Palantir, Claude will be integrated into government workflows such as data processing and document preparation.
Amazon is considering increasing its investment in Anthropic, depending on whether or not Anthropic trains its AI models using Amazon-developed silicon hosted on AWS.
At Contrary Research, we’re building the best starting place to understand private tech companies. We can't do it alone, nor would we want to. We focus on bringing together a variety of different perspectives.
That's why we're opening applications for our Research Fellowship. In the past, we've worked with software engineers, product managers, investors, and more. If you're interested in researching and writing about tech companies, apply here!