Thinking Machines Lab Aims to Eliminate Randomness in AI Model Responses
Details have emerged about one of Mira Murati’s Thinking Machines Lab projects following the startup’s $2 billion seed round. The project, which aims to make AI models produce consistent responses, was detailed in a research blog post published on Wednesday.
The blog post, titled “Defeating Nondeterminism in LLM Inference,” examines the underlying causes of the randomness observed in AI model responses. Ask ChatGPT the same question several times, for instance, and you will often get different answers. The AI community has largely accepted this inconsistency, treating today’s models as non-deterministic systems. Thinking Machines Lab, however, believes the problem can be solved.
The post, written by Thinking Machines Lab researcher Horace He, argues that the root cause of AI model randomness lies in how GPU kernels – the small programs that run inside Nvidia’s chips – are stitched together during inference processing (everything that happens after a user submits a prompt in ChatGPT). By carefully controlling this orchestration layer, He argues, AI models can be made deterministic.
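To give a sense of the kind of numerical effect at play, the sketch below (an assumed illustration, not code from the blog post) shows that floating-point addition is not associative: summing the exact same values in a different order, as differently scheduled kernels or different batch sizes can do, yields slightly different results.

```python
import numpy as np

# Assumed illustration (not Thinking Machines Lab's code): floating-point
# addition is not associative, so reducing the same values in a different
# order can produce slightly different results.
rng = np.random.default_rng(0)
values = rng.standard_normal(100_000).astype(np.float32)

# Order 1: strictly sequential accumulation, one value at a time.
sequential = np.float32(0.0)
for v in values:
    sequential += v

# Order 2: NumPy's blocked/pairwise reduction over the same values.
blocked = values.sum(dtype=np.float32)

print(sequential, blocked)        # typically differ in the last few bits
print(sequential == blocked)      # often False, despite identical inputs
```

The same principle applies at much larger scale inside a model: if the order of low-level operations shifts from one request to the next, tiny numerical differences can compound into visibly different outputs.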
Beyond making responses more reliable for businesses and scientists, He notes that deterministic AI models could also improve reinforcement learning (RL) training. RL rewards AI models for correct answers, but when answers to the same prompt vary slightly, the reward data becomes noisy. More consistent model responses could smooth out the entire RL process, according to He. Thinking Machines Lab has said it plans to use RL to customize AI models for businesses.
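As a toy illustration of that point, the hypothetical sketch below (not from the post) shows how a model that answers the same prompt differently from run to run turns a simple correctness reward into a noisy training signal.

```python
import random

# Hypothetical stand-in for a nondeterministic model: the same prompt
# occasionally produces a different answer.
def nondeterministic_model(prompt: str) -> str:
    return random.choice(["4", "4", "4", "5"])

def reward(answer: str, expected: str = "4") -> float:
    # A simple correctness reward of the kind used in RL fine-tuning setups.
    return 1.0 if answer == expected else 0.0

# Querying the same prompt repeatedly yields a mix of rewards: the training
# signal wobbles even though nothing about the task has changed.
prompt = "What is 2 + 2?"
rewards = [reward(nondeterministic_model(prompt)) for _ in range(20)]
print(rewards)
print("mean reward:", sum(rewards) / len(rewards))
```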
While the specifics of its first product remain undisclosed, Murati, OpenAI’s former CTO, has said it will be useful for researchers and startups developing custom models. The blog post launches Thinking Machines Lab’s new research series, “Connectionism,” through which the company plans to share research findings, code, and other material publicly in an effort to foster a collaborative research culture.
Thinking Machines Lab’s commitment to open research echoes the pledge OpenAI made when it was first founded. As OpenAI has grown, however, it has become more closed off. It remains to be seen whether Thinking Machines Lab will hold to this commitment as it scales.
The research blog offers a rare peek into one of Silicon Valley’s most secretive AI startups. While it does not reveal exactly where the technology is headed, it signals that Thinking Machines Lab is tackling some of the most pressing questions at the frontier of AI research. The real test will be whether the lab can solve these problems and turn the answers into products that justify its lofty valuation.