Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has officially launched its debut product, "Tinker". Tinker is an API designed to simplify fine-tuning of AI models, offering a user-friendly interface for researchers, developers, and other AI enthusiasts. The announcement was made on X on October 1, 2025.
Murati, who served as OpenAI's CTO and briefly as interim CEO in 2023, founded Thinking Machines Lab after leaving OpenAI in 2024. The company has quickly gained attention, assembling a team of AI developers and researchers. In a seed round led by Andreessen Horowitz in June 2025, the startup raised $2 billion at a valuation of $12 billion, signaling strong investor confidence in its potential. Other prominent investors include Nvidia, AMD, and Cisco.
Tinker aims to address the complexities associated with training large language models (LLMs) by providing clean abstractions for writing experiments and training pipelines, while handling distributed training complexity. The API empowers researchers and developers to experiment with models by giving them control over algorithms and data. According to Thinking Machines Lab, switching between different models can be achieved with minimal code changes; for example, swapping a lightweight model for a massive mixture-of-experts system like Qwen-235B-A22B can be done by changing a single line of Python code.
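The announcement does not include a full API reference, so the snippet below is only a rough sketch of what such a one-line swap could look like; the `tinker` module, the client class, and the method names are illustrative assumptions based on the company's description, not documented interfaces.

```python
# Hypothetical sketch only: the module, class, and method names below are
# assumptions, not Tinker's documented API.
import tinker

service_client = tinker.ServiceClient()

# The announcement describes swapping models as a one-line change, e.g. from a
# small dense model to a large mixture-of-experts model such as Qwen-235B-A22B:
base_model = "Qwen/Qwen3-235B-A22B"   # e.g. previously a lightweight Llama variant

# Tinker is described as handling distributed training behind the API, so the
# rest of the fine-tuning loop stays the same regardless of model size.
training_client = service_client.create_lora_training_client(base_model=base_model)
```

The point of the example is the division of labor the company describes: the user controls the data and the algorithm, while scheduling and distributed execution stay behind the API.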
"Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity," Murati stated on X. She believes Tinker will enable novel research, custom models, and solid baselines. The company's blog post emphasizes that Tinker advances their mission of enabling more people to research cutting-edge models and customize them to their needs.
The platform is currently available in private beta, with a waitlist for researchers and developers. Thinking Machines Lab plans to offer it for free initially, with a usage-based pricing model to be introduced in the coming weeks. Tinker supports fine-tuning of open-weight models such as Meta's Llama and Alibaba's Qwen. To assist developers, the company is also releasing the Tinker Cookbook, an open-source library of preconfigured recipes that run on the API.
Several research groups have already tested Tinker, including Princeton's Goedel Team, which trained mathematical theorem provers, and Stanford's Rotskoff Chemistry group, which fine-tuned models for chemistry reasoning. Berkeley's SkyRL group also ran multi-agent RL experiments, and Redwood Research used Tinker on difficult AI control tasks.
While some experts view Tinker as a useful but not groundbreaking tool, it is a step toward democratizing access to advanced AI capabilities and encouraging innovation. Under the hood, Tinker uses LoRA (low-rank adaptation), which trains a small set of added low-rank weights instead of the full model, to simplify fine-tuning and reduce costs. Murati believes the product is a turning point toward the democratization of large-scale model development.
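As a generic illustration of the technique rather than Tinker's own implementation, the sketch below shows the core idea of LoRA: the pretrained weight matrix is frozen and only a small low-rank update is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Generic LoRA layer: frozen pretrained weight W plus a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # low-rank factor A
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # low-rank factor B, zero-init
        self.scaling = alpha / r                                        # standard LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + (alpha / r) * B @ A; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

For a 4096 × 4096 projection with rank 8, the trainable parameters per layer drop from roughly 16.8 million to about 65 thousand, which is where the cost and memory savings come from.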