🏄 Your guide to AI: August 2022
Hi all!
Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI research, industry, geopolitics and startups during July 2022. Before we kick off, a couple of news items from us :-)
I discussed the state of AI in drug discovery with Chris Gibson (Recursion), Rich Law (Exscientia), and Najat Khan (Janssen) with MedCity News. You can watch the video here.
If you’d like to make a slide contribution to the State of AI Report 2022, just reply to this email with your proposal.
We’re back in-person with our non-profit RAAIS one-day summit on Friday 30th June 2023 in London. As usual, we’ll be hosting a top-tier group of large companies, startups, researchers and students all working on various aspects of AI. You can register your interest here.
This edition was written with Othmane Sebbouh and Nitarshan Rajkumar. Enjoy!
🆕 Technology news, trends and opinions
🏥 Life (and) science
DeepMind took the next logical step in their approach to AI-first biology by forming a partnership with the Francis Crick Institute where the company will host its first wet lab. This means that DeepMind will itself be running real-world experiments in a physical lab. This could include synthesizing and testing the properties of proteins designed by their AI systems. The outputs of these experiments will constitute key training data and supervision signals to close the design-build-test loop that is common within AI-first techbio companies. Moreover, I’m excited to see this move attract more biologists from pure wet lab work into hybrid computational work. Relatedly, DeepMind and EMBL announced that they’ve released the AlphaFold-predicted structures of nearly all known protein sequences (200 million!).
Hong Kong-based Insilico Medicine has dosed their first idiopathic pulmonary fibrosis (i.e. lung scarring) patient in a phase 1 study using an AI-discovered small molecule that hits a novel target. This constitutes the first time an AI-designed drug has entered clinical testing in China, following Exscientia, Recursion and Relay in the US.
🌎 The (geo)politics of AI
The big deal in the semiconductor industry was Biden signing the $280B industrial policy bill (the “CHIPS and Science Act”) to build up domestic manufacturing capabilities. This bipartisan policy is positioned as a geopolitical hedge against China’s acceleration in semiconductors. The bill provides $52B in subsidies and tax credits to companies manufacturing chips in the US - Intel will be a big beneficiary - and adds $200B for R&D in AI, robotics and other deeply technical fields. Public semiconductor stocks bounced on the news as industry analysts commented that without such subsidies, semiconductor manufacturers wouldn’t willingly onshore manufacturing to the US.
In addition, the US is pushing the Dutch government to ban ASML from selling its EUV machines to China as a means of curbing the country’s ability to manufacture leading-edge nodes. If this happens, it would be a major blow because ASML runs a quasi-monopoly on this key step in the manufacturing chain. Meanwhile, DJI, a world-leading drone maker based in China, is lobbying Congress to lift the export controls ban on selling Chinese drones to US government and law enforcement agencies. This lobbying apparently looks to be succeeding.
Over in France, STMicro and GlobalFoundries plan a new $5.7B chip factory, again funded by public money, in a move to reduce France’s reliance on China. The fab will focus on 18nm chips, a fit for the automotive and IoT industries.
Meanwhile in the UK, there was lots of Twitter discussion around the Newport Wafer Fab and why its acquisition by Nexperia, a Chinese-owned company, signals the country’s lack of strategic prioritization of domestic manufacturing.
New AI and consumer data regulations are being deliberated outside the EU: in Canada, Bill C-27 draws on the EU’s GDPR to enact consumer data privacy protections and also requires AI systems to mitigate risks of harm. The UK government published an AI regulation position paper, which pushes for regulation focused on high-risk concerns. Charlotte Stix at EuropeanAI has more.
In the defense world, the US Army awarded a $36M firm-fixed-price contract to each of Palantir and Raytheon for the development and integration of a Tactical Intelligence Targeting Access Node (TITAN) prototype system. The US Navy began testing manned-unmanned teaming software with their Super Hornets and drones. NATO launched a 15-year innovation fund with a mandate to invest in companies developing security-relevant technologies that could strengthen the alliance. The fund is capitalised with $1B and can invest in both companies and investment funds.
In China, the Communist Party is said to be using AI to detect loyalty to the party. This snippet is pretty wild: "This equipment is a kind of smart ideology, using AI technology to extract and integrate facial expressions, EEG readings and skin conductivity," RFA's translation of the initial Weibo post reads, "making it possible to ascertain the levels of concentration, recognition and mastery of ideological and political education so as to better understand its effectiveness."
Also in China, authorities levied a massive $1.2B fine on ridesharing company DiDi over its alleged abuse of data. A few years ago, this would have come as quite a shock, as the standard Western view of Chinese tech regulation was that there is little (or rather a very different) respect for data privacy…
The carbon footprint of training an AI model depends heavily on where in the world the training happens. New work trained the BERT language model at different times of the year in data centers around the world, showing that carbon dioxide emissions were highest in Germany, the central US and Australia - by a factor of 1.7x on average.
🍪 Hardware
In a blog post, NVIDIA introduced a deep reinforcement learning method called PrefixRL tasked with designing arithmetic chip circuits. The company said they’d used PrefixRL to design their latest Hopper GPU architecture, the H100 GPU. NVIDIA said that some of the AI-designed circuits are 25% smaller than, and equally fast as, those designed by state-of-the-art electronic design automation tools. This thread from Rajarshi Roy, an applied researcher at NVIDIA, serves as an excellent technical walkthrough of the method. NVIDIA’s work follows Google’s and InstaDeep’s chip placement work from 2020.
This month came with its share of bad AV news. Outages left Cruise cars frozen in traffic and caused jams in San Francisco. The company is currently going through hard times: General Motors said Cruise lost $500M (more than $5M per day) during the second quarter. Ford-backed AV startup Argo also let go of 150 employees (of 2,000 globally). Things aren’t going any better at Tesla: a driver using Autopilot killed a motorcyclist, meaning Tesla is now involved in 39 of the 48 crashes under investigation by NHTSA, the US traffic safety administration. Last but not least, another piece of news with potentially bigger long-term implications: Andrej Karpathy, who was leading Tesla’s AI efforts, notably on Autopilot, left the company. Karpathy, a major figure in the AI community, gave credibility to Autopilot’s approach to autonomy: an autonomous driving system that navigates traffic using computer vision alone, without additional sensors.
Meanwhile in Europe, the new Vehicle General Safety Regulation came into effect. The legal framework mandates a range of advanced driver assistance systems in public road vehicles produced from 2024 onwards, including lane keeping, parking assist, and driver attention warning. The EU also plans to propose legislation in September to approve the registration and sale of Level 3-capable self-driving vehicles (able to drive without human intervention for stretches of time, but not from A to B) in member states.
🏭 Big tech
As with text-to-image generation, it seems that each month brings its share of ML-for-code news. In the past two months, we covered the commercialization of Github Copilot and the launch of Amazon Code Whisperer, Tabnine raising a Series B, and Mintlify raising a Seed round to automatically generate documentation for code. It’s now Google and Huawei’s turn. Google revealed that it was internally using an AI coding assistant. Their approach relies on a hybrid model combining a modern transformer LM with a traditional rule-based semantic engine for code completion. The model is trained on Google’s monorepo. It takes as input around 1,000 to 2,000 tokens surrounding the cursor and outputs completions of the current line or multiple lines. What’s interesting is the study Google ran on the tool’s internal usage: "We compare the hybrid semantic ML code completion of 10k+ Googlers (over three months across eight programming languages) to a control group and see a 6% reduction in coding iteration time (time between builds and tests) and a 7% reduction in context switches (i.e. leaving the IDE) when exposed to single-line ML completion. These results demonstrate that the combination of ML and SEs can improve developer productivity. Currently, 3% of new code (measured in characters) is now generated from accepting ML completion suggestions." Huawei also published its own language model for code, called PanGu-Coder, which it says achieves “equivalent or better performance than similarly sized models, such as CodeX, [the model behind Github Copilot]”.
Meta released the third iteration of their chatbot, BlenderBot 3. As with the second one, BlenderBot 3 is open-source. It is built on Meta’s recently open-sourced LLM OPT-175B. This time, anyone (in the US for now) can chat with it on https://blenderbot.ai/. The risk of open-sourcing these chatbots – which Meta warned about – is that they can be driven to spout misinformation and offensive speech. BlenderBot did not disappoint (or did, depending on your perspective). This thread contains a few examples of these undesirable outputs. Whether this is specific to Meta (e.g. because of its training dataset) is debatable – would Google’s LaMDA, for example, fare better when discussing sensitive questions? We don’t have an answer because LaMDA is unfortunately not publicly accessible.
We have covered text-to-image models at length in the past few months, so we won’t be spending too much time discussing them in this section (see more on Stable Diffusion below). Note though that OpenAI has made DALL-E available in beta: users get free credits and can buy 115 more image generations for $15. Meta released Make-A-Scene (the arXiv paper dates back to March), which lets users generate images from both text and sketches.
🔓 Open source AI
This was a big month for open-source AI projects led by community-driven organizations. First up was the release by BigScience of BLOOM, a 175B parameter autoregressive language model. The collaborative initiative led by HuggingFace brought together over 1,000 researchers from over 70 countries to train this model on the Jean Zay supercomputer in France. The project took over a year starting in May 2021, with the training process itself taking 4 months from March 2022 using 384 A100 GPUs, and was funded by a €3M grant from the French government. BigScience’s work here demonstrates the viability of large-scale collaborative AI projects using public supercomputing resources, and of having all the work be done openly (check out a write-up of their technical challenges and findings). You can try the model for free here.
Researchers at Tsinghua University released GLM-130B, an English and Chinese language model. The architecture leverages an infilling objective slightly adapted from the standard GPT training objective, and the model performs better than all other openly available large language models (OPT, BLOOM) as well as the original GPT-3. Most remarkable is that this came from an academic research lab, though it relied on donated resources from a startup (768 A100 GPUs for 2 months). This work was led by Jie Tang at the Beijing Academy of Artificial Intelligence, who previously led a position paper on Big Models – these initiatives signal a serious effort by Chinese academics to compete at the level of elite Western AI companies.
It seemed as if DALL-E 2’s remarkable capabilities would be locked behind a paid API, but the upstart team at stability.ai has been working under the radar against this, procuring a cluster of 4,000 A100 GPUs to support their research project Stable Diffusion. This is a text-to-image model that performs at a similar level to DALL-E 2, but is free to access in beta now, with model weights accessible to researchers and plans to make them publicly available soon as well. A significant focus of their efforts has been on reducing the model’s size to make it more accessible: it runs in under 6GB of GPU VRAM, and even on a MacBook Air. This project leveraged the large communities of EleutherAI and LAION, showing again that distributed research and engineering projects are rapidly catching up to the work of even the best private entities like OpenAI.
📑 Research
Here’s a selection of impactful work that caught our eye.
YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors, Academia Sinica. If you started looking into computer vision in the mid-2010s and haven’t been following closely, there is a chance you know YOLO (You Only Look Once) as an old real-time object detector that has fallen out of favor. This couldn’t be further from the truth. The model has been consistently improved from its first iteration in 2015 to the latest, YOLOv7, released last month. YOLOv7 outperforms all other object detectors in both speed and accuracy on the MS COCO dataset. YOLO’s story is a nice tale of how years of engineering effort and the integration of adjacent computer vision ideas keep an “old” system in best-in-class form. Note, however, that one of YOLO’s key authors quit computer vision because he disagreed with its use in adversarial contexts such as military AI.
Leakage and the Reproducibility Crisis in ML-based Science, Princeton University. Machine learning is increasingly used in scientific fields. This means that advances in ML result in improvements in science (AlphaFold2 is a good example), but also that flaws the ML community disregards in its systems propagate to scientific conclusions drawn from ML model predictions. One such flaw is data leakage, which covers several problems with data preparation (no train-test split, non-independent train and test data, test data drawn from a distribution other than the one of interest, etc.). Princeton researchers conducted a literature survey that identified 329 science papers affected by such problems. They stress that the modeling errors caused by data leakage could fuel a reproducibility crisis in science given the increasing use of ML methods.
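To make the most common of these failures concrete, here is a minimal, hypothetical sketch (plain Python, illustrative data, not from the paper) of how computing preprocessing statistics on the full dataset leaks test-set information into training:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(100)]
train, test = data[:80], data[80:]

# Leaky pipeline: the normalization statistic is computed on ALL
# points, test set included, before the train-test split is honored.
mu_leaky = statistics.mean(data)

# Correct pipeline: the statistic comes from the training split only.
mu_clean = statistics.mean(train)

# The two means differ, so the leaky pipeline has quietly consumed
# information about the held-out test points during preprocessing.
print(f"leaky mean={mu_leaky:.4f}, clean mean={mu_clean:.4f}")
```

The same trap applies to any fitted preprocessing step (scalers, feature selection, imputation): fit on the training split only, then apply to the test split.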
Training Generalist Agents with Multi-Game Decision Transformers, Google. Researchers train and scale a Decision Transformer-based RL model to play 41 Atari games at once. If this sounds familiar, it’s because it is: the last few years’ progress in language and vision has been fueled by scaling large language models and training them on diverse datasets. This paper presents further evidence (after DeepMind’s Gato) that the same strategy works well for RL and games.
POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging, UC Berkeley. Our smartphones can benefit from the latest advances in deep learning models thanks to cloud computing. Data can be sent to the cloud for training large models, but this introduces privacy issues and data transfer costs. Computing directly on memory-scarce, battery-operated devices constrains on-device models to be relatively small. But for the first time, researchers succeeded in fine-tuning ResNet-18 and BERT models on a Cortex-M-class embedded device in an energy-efficient manner.
Scaffolding protein functional sites using deep learning (Science), University of Washington, EPFL, Harvard University. Led by the Baker lab, which developed RoseTTAFold for protein structure prediction, this work demonstrates the design of protein components with desired functional activity (e.g. a small-molecule binding site or enzymatic activity). Unlike prior work, the described system does not require the user to “specify the secondary structure or topology of the scaffold and can simultaneously generate both sequence and structure.”
Efficient Training of Language Models to Fill in the Middle, OpenAI. OpenAI introduced new edit and insert capabilities to their API in March – this paper explains how this was done through a simple augmentation of the data fed to the standard GPT training procedure. This fill-in-the-middle (FIM) augmentation splits an input into three pieces, moves the middle piece to the end, and concatenates the pieces with sentinel tokens. The key finding from this research was that incorporating this augmentation in the pretraining objective (even at levels as high as 50% of the data) enables these new capabilities without harming standard left-to-right generation quality. Tweet thread by an author here.
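As a rough character-level sketch of that transformation (the sentinel strings below are illustrative stand-ins for the paper’s special tokens, and the split points are chosen at random here):

```python
import random

# Illustrative sentinel strings; the real models use dedicated
# special tokens in the vocabulary rather than literal text.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def fim_transform(doc: str, rng: random.Random) -> str:
    """Split a document into (prefix, middle, suffix) at two random
    points, then move the middle to the end so that a left-to-right
    LM learns to generate the middle conditioned on both sides."""
    i, j = sorted(rng.randrange(len(doc) + 1) for _ in range(2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    return PRE + prefix + SUF + suffix + MID + middle

rng = random.Random(42)
print(fim_transform("def add(a, b):\n    return a + b\n", rng))
```

At inference time, the model is prompted with the prefix and suffix and asked to continue after the `<MID>` sentinel, yielding the insert/edit behavior.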
💰Startups
Funding highlight reel
Seedtag, a Spanish company which offers machine learning-based contextual advertising, raised over $250M from Advent International.
Cleerly raised a $223M Series C led by T. Rowe Price and Fidelity. The company aims to reduce future heart attack risk by quantifying atherosclerotic plaque in the arteries, the primary cause of heart attacks.
The US-based autonomous flying company Merlin Labs raised a $105M Series B led by Snowpoint and Baillie Gifford. The company currently offers AI software that assists human-piloted civilian planes, and has announced “an eight-figure contract with the United States Air Force to bring autonomy to the service’s C-130J Super Hercules transport aircraft, the most-used cargo platform in the fleet.”
Aisera raised a $90M Series D led by Goldman Sachs and Thoma Bravo. The company offers software based on natural language processing and robotic process automation to analyze and answer customer requests.
BigHat Biosciences raised a $75M Series B led by Section 32. The company is building a platform which uses AI for antibody design.
AI21 Labs, which builds large language models and aims to compete with the likes of OpenAI, raised a $64M Series B led by Ahren Innovation Capital Fund. The deal values the company at $664M.
Theator raised an additional $24M to its initial $15.5M Series A from Insight Partners and other investors. The company uses computer vision to automatically analyze surgical video recordings and provide insights to surgeons.
Zesty AI, which provides property insurers with models that evaluate wildfire and other climate risks, raised a $33M Series B round led by Centana Growth Partners.
Arena AI raised a $32M Series A led by Initialized Capital and Goldcrest Capital. The company uses AI to help companies with pricing, inventory management and quality assurance.
Related to what we wrote earlier in the hardware section, Celus, a German company that uses AI to automate circuit board design, raised a €25M Series A led by Earlybird Venture Capital.
You.com, an AI-powered search engine founded by Salesforce’s ex-chief scientist and MetaMind founder (acquired by Salesforce) Richard Socher, raised a $25M Series A led by Radical Ventures.
Deci raised a $25M Series B led by Insight Partners. The company uses Neural Architecture Search to provide companies with neural networks architectures that are faster to train and deploy.
Rebellions, a Korean AI chipmaker, raised a $23M Series A extension from KT, a Korean telecom company, following a $50M round last month led by Pavilion Capital.
MarqVision, which offers an AI-powered platform to detect counterfeits and protect IP, raised a $20M Series A led by DST Global Partners and Atinum Investment.
Datch, a company building an AI-voice assistant to help factory workers with their reporting, raised a $10M Series A led by Blackhorn Ventures.
HiddenLayer, which aims to protect deployed AI models from adversarial attacks, raised a $6M seed round led by Ten Eleven Ventures and Secure Octane.
Bobidi, which is building a product that rewards developers for testing AI models – the equivalent of bug bounties in traditional software – raised a $5.8M seed round from multiple investors, including Y Combinator and We Ventures.
Drover AI raised a $5.4M Series A led by Vektor Partners. The company uses computer vision to detect sidewalk riding by scooters, and sells its solution to scooter and bike operators, helping them improve safety and win city permits.
Phaidra, an AI-first industrial process optimization startup founded by DeepMind alums, raised $25M.
Exits
Databand.ai, an early stage data observability and quality monitoring company, was acquired by IBM for an undisclosed amount. This category has seen many entrants in the last 18 months with venture funding quite possibly outstripping customer demand. We’d expect to see more consolidation into platforms with a broader scope as a result.
Reinfer.io, a London-based NLP company focused on understanding customer conversations and feedback, was acquired by UiPath for an undisclosed amount. Reinfer was an early mover in enterprise NLP, having been started in 2015 by graduate students at UCL, well before the Cambrian explosion of large language models. Within UiPath, Reinfer is a great on-ramp into RPA and an effective streamliner of complex processes that involve parsing meaning and priorities from text.
Sonantic, a London-based voice synthesis company, was acquired by Spotify for €91M in a deal led by Spotify’s VP of Personalisation. This is another example of a voice synthesis/cloning company finding a scalable use case by joining a much larger product-led company: recall that Lyrebird joined Descript and Voysis joined Apple, amongst others.
Airobotics, maker of an autonomous drone platform, was acquired by rival American Robotics for an undisclosed sum. The former is mainly used for security and surveillance use cases, while the latter focuses on the industrial sector (e.g. mining, oil and gas).
Inspection2, a UK-based company running industrial inspections using computer vision, was acquired by DroneBase, which itself helps customers capture, process and collaborate on visual data procured from drones.
Hummingbird Technologies, a UK-based geospatial data analytics company focused on regenerative farming, was acquired by Agreena, which helps farmers track their emissions and transition to regenerative farming.
RoadBotics, a CMU spinout offering a computer vision solution for road quality monitoring, was acquired by tire maker Michelin.
---
Signing off,
Nathan Benaich, Othmane Sebbouh and Nitarshan Rajkumar
14 August 2022
Air Street Capital | Twitter | LinkedIn | State of AI Report | RAAIS | London.AI
Air Street Capital is a venture capital firm investing in AI-first technology and life science companies. We invest as early as possible and enjoy iterating through product, market and technology strategy from day 1. Our common goal is to create enduring companies that make a lasting impact on their markets.