AI Sector: Recent Technology Highlights and Investment Opportunities
Source: 36Kr Shenyiju (translation desk)
Editor's note: Artificial intelligence is developing at a rapid pace, and many AI startups have stood out. In this article, we look at which promising AI startups have caught the eye of investors at firms such as Sequoia Capital and Kleiner Perkins Caufield & Byers. This article is a compiled translation; we hope it inspires you.
If you only have a few minutes to spare, here are the most exciting artificial intelligence startups that investors, operators, and founders should know about.
Artificial intelligence is the main thread of this year's technology story. Since the last “What to Watch in AI” series, the field has continued to attract capital, talent, and attention. Of course, not all attention is positive. Despite widespread excitement about the technology's capabilities, over the past four months industry heavyweights have expressed their concerns and regulators have begun devising some safeguards. In the coming months and years, artificial intelligence will have a sweeping impact on our lives and create new winners and losers across the globe.
Our "What to Watch" series is designed to help readers prepare for the times ahead and envision the future more clearly. This is a great starting point for those who want to understand the technologies emerging on the artificial intelligence frontier and take advantage of the changes taking place. To do this, we invited the most impressive investors and founders in the field of artificial intelligence to introduce the startups they believe are the most promising.
1. Alife
Using artificial intelligence to improve IVF technology
In any reproductive process, there are moments that require human decision-making, and the two steps most relevant to IVF are "ovarian stimulation" and "embryo selection."
"Ovarian stimulation" refers to determining the dose of medication a patient receives to stimulate the growth of follicles in the ovaries, and when to give a trigger injection to stimulate the follicles to release eggs. The timing of the trigger shot is crucial; if it's too early, you might get immature eggs; if it's too late, you might get eggs that are too mature, or you might not get as many eggs as possible.
"Embryo selection" refers to choosing which fertilized egg to use and implant. Currently, clinicians and embryologists, like most medical professionals, base their decisions on a combination of their own experience and training, morphological grading systems, and trial and error. If the dose or timing isn't right in one cycle, they will adjust it in the next cycle. This requires very high professional competence of doctors, and at this point, doctors have varying levels of skill, and their skills are very important to the results. For fertility, a severely supply-constrained market, that means a hefty price tag, especially if you want to see optimal results.
Alife is building artificial intelligence tools to improve in vitro fertilization (IVF) outcomes. The company uses artificial intelligence tools to provide practitioners with "superpowers" to enhance their decision-making accuracy by leveraging massive input and outcome data sets. Now, through a simple interface, doctors can enter a patient's characteristics and receive precise recommendations at key moments in the fertility journey, derived from the results of thousands of previous cycles. These data sets come from vast amounts of patient information that already exist, and they get better as each patient uses Alife products.
These tools will change the nature of the fertility industry. Alife's research shows that its machine learning model can help doctors improve trigger timing in roughly 50% of cases and help retrieve, on average, three more mature eggs, two more fertilized eggs, and one more embryo. Alife's products can significantly broaden access to infertility treatment, lowering the cost per patient by reducing the medication doses required and increasing the success rate of IVF cycles. They would also level the playing field for doctors, giving those who lack first-hand experience access to a wider range of knowledge and information.
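To make the "enter a patient's characteristics, get a recommendation" loop concrete, here is a minimal, hypothetical sketch of that kind of supervised model. The features, synthetic data, and relationships are all invented for illustration; this is not Alife's actual model, data, or interface.

```python
# Hypothetical sketch only: a toy model mapping patient characteristics and a
# candidate trigger day to a predicted count of mature eggs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic "historical cycles": [age, AMH level, follicle count, trigger day]
X = rng.uniform([25, 0.5, 5, 8], [42, 6.0, 30, 14], size=(500, 4))
# Synthetic outcome: mature eggs retrieved (a made-up relationship plus noise)
y = 0.8 * X[:, 2] - 0.3 * (X[:, 0] - 30) - 0.5 * np.abs(X[:, 3] - 11) + rng.normal(0, 1.5, 500)

model = GradientBoostingRegressor().fit(X, y)

# For a new patient, score each candidate trigger day and surface the best one.
patient = [34, 2.1, 18]  # age, AMH, follicle count (hypothetical)
best_day = max(range(9, 14), key=lambda d: model.predict([[*patient, d]])[0])
print("Suggested trigger day:", best_day)
```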
Ultimately, you can imagine Alife's tools informing every judgment call in the process and allowing practitioners other than physicians to operate, significantly changing the industry's cost structure and accessibility. What's more, data-driven precision medicine, which augments (or eventually replaces) a person's judgment with personalized recommendations, is not unique to the IVF world. There are thousands of moments like this across medicine where we have the opportunity to use data to dramatically change outcomes and access to critical procedures and treatments.
—Rebecca Kaden, general partner, Union Square Ventures
2. Glean
Enterprise Search
At work, finding exactly the information you need when you need it should be quick and easy. Since everyone uses a lot of applications to get their work done, and as a result generates a lot of data and documents, this isn't always the case. As “knowledge” grows exponentially and the nature of work becomes increasingly distributed, it takes longer and longer to find existing knowledge. In other words, it's quite difficult to "search for stuff" at work.
To help employers solve this problem, Arvind Jain and his team built Glean, an AI-powered unified workplace search platform. It equips employees with an intuitive work assistant that helps them find exactly what they need and proactively discover what they should know.
The company's mission has been simple from the beginning: help people find answers to all their workplace questions faster, with less frustration and wasted time. But the product has since expanded far beyond search. For example, Glean not only searches all workplace applications and knowledge bases (Slack, Teams, Google Drive, Figma, Dropbox, Coda, etc.), it also understands natural language and context, personalizing interactions based on people's roles and their relationships inside and outside the company. It intelligently surfaces your company's most popular and verified information, helping you discover what your team knows and stay aligned, all in a fully permissioned manner.
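As a rough illustration of what "permissioned" search means in practice, here is a minimal, hypothetical sketch that filters documents by access rights before ranking them by similarity. The toy embedding and data are invented; this is not Glean's API or architecture.

```python
# Hypothetical sketch only: permission-aware search, with access control
# applied before relevance ranking. The embedding is a toy stand-in for a
# real text-embedding model.
import numpy as np

documents = [
    {"id": 1, "text": "Q3 launch plan and timeline", "allowed": {"alice", "bob"}},
    {"id": 2, "text": "Payroll export for finance", "allowed": {"hr"}},
    {"id": 3, "text": "Design system documentation", "allowed": {"alice", "bob", "carol"}},
]

def embed(text: str) -> np.ndarray:
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def search(query: str, user: str, top_k: int = 2) -> list[dict]:
    q = embed(query)
    visible = [d for d in documents if user in d["allowed"]]   # permissions first
    return sorted(visible, key=lambda d: -float(q @ embed(d["text"])))[:top_k]

print(search("launch plan", user="alice"))  # never surfaces the payroll doc
```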
As organizations become more distributed and knowledge becomes more fragmented, intuitive work assistants like Glean are no longer a nice-to-have but a critical tool for improving employee productivity. The company's growth will break down barriers that impede progress and create a more positive, productive work experience.
Additionally, Glean's search technology enables it to bring generative AI into the workplace while adhering to the strict permissions and data-management requirements of the enterprise. Today, one of the main obstacles preventing companies from putting AI applications into production is their inability to implement appropriate governance controls. By enforcing real-time data permissions within an enterprise's own environment, Glean has become an ideal solution for solving governance problems at scale, enabling enterprises to confidently leverage their internal data for model training and inference and, in effect, to serve as an enterprise-grade AI data platform and vector store.
3. Lance
Storage and Management of Multimodal Data
We've all played with Midjourney, and most of us have seen a demo of GPT-4. Midjourney (text to image) and GPT-4 (image to text/code) illustrate what becomes possible when models go multimodal, bridging different forms of media such as text, images, and audio. While much of the current AI craze revolves around text-based models, multimodal models are key to building more accurate representations of the world.
As we embark on the next wave of AI applications in industries such as robotics, healthcare, manufacturing, entertainment, and advertising, more and more companies will build on multimodal models. Companies like Runway and Flair.ai are good examples of emerging leaders in their fields that have seen massive user demand for their products, while existing companies like Google have begun releasing similar multimodal capabilities.
However, using multimodal models poses a challenge: how to store and manage the data? Traditional storage formats like Parquet are not optimized for unstructured data, so large language model teams experience slow performance when loading, analyzing, evaluating, and debugging data. Additionally, large language model workflows are more prone to errors in subtle ways due to the lack of a single source of truth. Lance is the latest company to emerge to address this challenge. Companies such as Midjourney and WeRide are converting petabyte-scale data sets into the Lance format, which provides significant performance improvements and an order of magnitude lower incremental storage costs compared to traditional formats such as Parquet and TFRecords.
Lance doesn't stop at storage. They've recognized the need to rebuild the entire data management stack to better fit the world we're moving toward, one in which unstructured, multimodal data becomes an enterprise's most valuable asset. Their first platform product, LanceDB (currently in private beta), provides a seamless embedded experience for developers looking to build multimodal functionality into their applications.
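For a sense of what that embedded experience looks like, here is a minimal sketch of the LanceDB Python workflow. The exact method names can differ between versions, so treat the calls below as assumptions drawn from LanceDB's public examples rather than a definitive reference; the data and vectors are made up.

```python
# A minimal LanceDB sketch (API assumed from public examples; verify against
# the version you install).
import lancedb

db = lancedb.connect("./lance_demo")  # embedded: the "database" is a local directory

# Rows mix structured fields with vector embeddings, e.g. for video clips.
table = db.create_table(
    "clips",
    data=[
        {"vector": [0.1, 0.3, 0.5], "caption": "sunset over water", "source": "video_001"},
        {"vector": [0.9, 0.2, 0.4], "caption": "city street at night", "source": "video_002"},
    ],
)

# Nearest-neighbor search plus a metadata filter, all from the same store.
results = table.search([0.1, 0.3, 0.4]).where("source = 'video_001'").limit(5).to_list()
print(results)
```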
4. Abnormal Security
Containing the wave of AI-enhanced cyberattacks
I am an unabashed optimist when it comes to generative AI, but I am not naive on the subject. For example, I'm concerned about the proliferation of "social engineering" attacks such as spear phishing, which often use email to extract sensitive information. Since ChatGPT became popular last year, the incidence of such attacks has increased dramatically.
In the past year, the number of attacks per 1,000 people has jumped from less than 500 to more than 2,500, according to Abnormal Security. The sophistication of attacks is also rising dramatically. Just as any student can use ChatGPT to write a perfect essay, ChatGPT can also be used to send grammatically perfect, dangerously personalized fraudulent messages.
According to the FBI, such targeted "business email compromise" attacks have caused more than $50 billion in losses since 2013. And it's going to get worse. Every day, countless cybercriminals and other bad actors exploit black-hat tools like "WormGPT," a chatbot built on malware-related data and designed to orchestrate the most convincing, large-scale fraud campaigns.
Fortunately, Abnormal co-founders Evan Reiser and Sanjay Jeyakumar are working hard to use artificial intelligence to combat this threat. You can think of this as using AI to fight AI. Historically, email security systems scanned for signatures of known bad behavior, such as specific IP addresses or attempts to access personally identifiable information (PII).
Harnessing the power of artificial intelligence, Abnormal turns all of this on its head. Because AI now makes many attacks look legitimate, Abnormal's approach is to thoroughly model known good behavior so that even subtle deviations stand out. The company uses large language models to build a detailed representation of an organization's internal and external digital workings, such as who typically talks to whom and what they are likely to interact about. If my partner Reid Hoffman sent me an email saying, "Hey, please send me the latest information on Inflection.AI," Abnormal's AI engine would quickly flag it: Reid rarely opens with "hey," rarely sends single-sentence emails, and he has never asked me to send him a file about Inflection.AI. (As a co-founder and board member of the company, he had more access to these documents than I did!)
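To make the "learn good behavior, flag deviations" idea concrete, here is a deliberately tiny, hypothetical sketch of per-sender baselining. The features and thresholds are invented for illustration and this is not Abnormal's detection engine.

```python
# Hypothetical sketch only: score an incoming message against a known-good
# profile of the sender.
from dataclasses import dataclass

@dataclass
class SenderProfile:
    usual_greetings: set[str]
    typically_requests_files: bool
    median_message_words: int

profiles = {
    "reid@example.com": SenderProfile({"hi", "hello"}, False, 80),
}

def anomaly_score(sender: str, greeting: str, requests_file: bool, word_count: int) -> int:
    p = profiles[sender]
    score = 0
    if greeting not in p.usual_greetings:
        score += 1                                   # unusual opening
    if requests_file and not p.typically_requests_files:
        score += 2                                   # out-of-character file request
    if word_count < p.median_message_words // 4:
        score += 1                                   # unusually terse message
    return score

# "Hey, please send me the latest information on Inflection.AI."
print(anomaly_score("reid@example.com", "hey", requests_file=True, word_count=10))  # 4 -> review
```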
Not surprisingly, as security concerns around generative AI continue to grow, demand for Abnormal from enterprise customers has accelerated. I find Abnormal's success especially gratifying because it has been able to use AI so quickly to address problems that AI itself is accelerating. In periods of disruptive technological change, bad actors often enjoy lengthy first-mover advantages; after all, they can exploit innovation without worrying about product quality, safety, or regulators who have yet to enact new laws.
5. Dust
Empower knowledge workers
It is clear that large language models will improve the efficiency of knowledge workers. But it's unclear exactly how. Dust is trying to figure that out. Large language models are of little help inside an enterprise if they cannot access internal data. So Dust built a platform that indexes, embeds, and continuously updates an enterprise's internal data (Notion, Slack, Drive, GitHub) and exposes it to products powered by large language models.
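The paragraph above describes an index-and-retrieve loop feeding a language model. Here is a minimal, hypothetical sketch of that shape, with retrieval reduced to keyword overlap and the model call left as a stub; none of the connector names or functions are Dust's actual API.

```python
# Hypothetical sketch only: retrieve internal documents, then assemble a
# prompt for a large language model (the model call itself is stubbed out).
internal_docs = [
    {"source": "notion", "text": "Onboarding checklist for new engineers"},
    {"source": "slack", "text": "Incident postmortem for the March billing outage"},
    {"source": "github", "text": "README for the billing service deployment"},
]

def overlap(question: str, text: str) -> int:
    # Toy relevance score; real systems use embeddings kept continuously up to date.
    return len(set(question.lower().split()) & set(text.lower().split()))

def retrieve(question: str, k: int = 2) -> list[dict]:
    return sorted(internal_docs, key=lambda d: -overlap(question, d["text"]))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("what happened in the march billing outage"))
```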
Dust co-founders Gabriel Hubert and Stanislas Polu sold a company to Stripe and worked there for five years. They've seen firsthand how fast-growing companies struggle with scale, and they've seen firsthand what might be called "information debt." Now they're focused on applying large language models to solve some of the major pain points associated with it, and Dust is exploring a range of such applications on its platform.
6. Labelbox
Unlocking business data
The “rise of big data” has been underway for more than 20 years, and although companies are ingesting more data than ever before, many still struggle to use that data to extract insights from artificial intelligence models. Data processing and interpretation remain the most tedious and expensive parts of the AI pipeline, yet also the most important for high-quality results. Even with the rise of pre-trained large language models, companies will still need to focus on using their own proprietary data (across multiple modalities) to create uniquely positioned generative AI that delivers differentiated services and insights and improves operational efficiency.
Labelbox solves this challenge by simplifying how businesses feed datasets into AI models. It helps data and machine learning teams find the right data, process and interpret it, push models to applications, and continuously measure and improve performance.
Labelbox's new platform takes advantage of generative artificial intelligence. Model Foundry allows teams to experiment quickly with AI foundation models from all the major closed and open-source providers, pre-labeling data with just a few clicks. This way, they can see which model performs best on their data. Model Foundry automatically generates detailed performance metrics for each experiment run while versioning the results.
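As a rough illustration of that pre-label-and-evaluate loop, here is a tiny, hypothetical sketch in which the foundation-model call is stubbed out and accuracy is measured against a small human-labeled audit set. It is not Labelbox's API; the data and labels are invented.

```python
# Hypothetical sketch only: pre-label listings with a (stubbed) foundation
# model, then compare against human ground truth to score the model.
def foundation_model_prelabel(text: str) -> str:
    # Stand-in for a call to a hosted foundation model.
    return "electronics" if "laptop" in text.lower() else "apparel"

audit_set = [
    ("Lightweight laptop with 16GB RAM", "electronics"),
    ("Cotton t-shirt, relaxed fit", "apparel"),
    ("USB-C charging cable for laptops", "electronics"),
    ("Leather laptop bag with shoulder strap", "apparel"),  # the stub gets this wrong
]

predictions = [(text, foundation_model_prelabel(text)) for text, _ in audit_set]
correct = sum(pred == label for (_, pred), (_, label) in zip(predictions, audit_set))
print(f"Pre-label accuracy: {correct}/{len(audit_set)}")
```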
The impact could be far-reaching. Traditionally, humans have spent days completing a simple but time-consuming task, such as classifying an e-commerce listing containing multiple paragraphs of text. With GPT-4, this task can be completed within hours. Model Foundry allows companies to discover these efficient ways for themselves.
This isn't the only example. Early results show that more than 88% of labeling tasks can be accelerated by one or more base models. Labelbox allows anyone to pre-label data with just a few clicks, without the need for coding and entering data into a model. This tool is designed to empower teams to work collaboratively and leverage cross-functional expertise to maintain manual oversight of data quality assurance. This capability democratizes access to artificial intelligence by allowing language model experts and small and medium-sized enterprises to easily evaluate models, enrich data sets, and collaborate to build intelligent applications.
Labelbox has been proven to significantly reduce costs and improve model quality for some of the world's largest companies, including Walmart, Procter & Gamble, Genentech, and Adobe.
7. Runway
New Creative Suite
Artificial intelligence is everywhere and increasingly becoming a commodity. In most cases, companies use AI as chatbots to enrich existing applications. Few AI applications are reinventing the product experience, using the technology to fundamentally change how we interact with products the way Google's search engine changed how we browse the internet or Instagram changed how we share photos from our phones. Such AI applications require a deep understanding of existing user experiences, visionary product thinking, and cutting-edge technology.
Runway is a leading example of a company using applied AI research to reimagine creative experiences and build an entirely new creative suite.
Since October 2022, Runway has developed more than 30 AI "magic tools" covering video, images, 3D, and text, serving every stage of the creative process from pre-production to post-production. Their client base includes Fortune 500 and Global 2000 companies such as CBS's The Late Show with Stephen Colbert, New Balance, Harbor Picture Video, Publicis, and Google. The platform has also been used to edit Oscar-nominated films such as the Hollywood hit Everything Everywhere All at Once.
8. NewLimit
Reshaping cell fate
Cells are the most complex computer systems on Earth. Like computer chips, DNA is composed of basic units that create complex functions. Unlike bit-based codes, atom-based codes are random and hierarchical. One system depends on another, which in turn depends on other physical systems, each affected by heat, acidity, and molecules in the cell's microenvironment.
Despite these interdependencies, the cellular machine code (DNA) can efficiently run different programs. Although your liver cells and skin cells contain the same genome, these cell types look, feel and function differently. Why? Because they are executing different epigenetic programs.
In 2006, Takahashi et al. used a combination of four transcription factor (TF) proteins to reprogram mature cells into stem cells, pioneering the field of epigenetic reprogramming. Transcription factors are proteins that regulate genes, essentially changing the "program" that is running. Takahashi and Yamanaka's discovery led to the creation of induced pluripotent stem cells (iPSCs) and won them the Nobel Prize. Since then, many research groups have begun to apply unique TF combinations to change cellular states, rejuvenate damaged cells, and restore youthful cell phenotypes.
While epigenetic reprogramming is becoming more tractable, it is still no trivial matter. A team must discern which combination of TFs effectively transitions cells from state A to a desired state B. For example, future TF combinations may allow us to convert diseased cells into healthy cells, creating a new class of drugs. Because the exact combination of TFs is not known for many application areas, very large-scale reprogramming screens are needed, and with over 1,500 native human TFs, a more efficient search method is required. We believe NewLimit is designing such an approach.
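A quick back-of-the-envelope calculation shows why the search has to be guided rather than exhaustive: even small combinations drawn from roughly 1,500 native human TFs explode combinatorially.

```python
# Why brute force fails: the number of ways to choose a Yamanaka-style set of
# transcription factors from ~1,500 candidates.
from math import comb

n_tfs = 1500
print(f"{comb(n_tfs, 3):,}")  # 3-factor combinations: 561,375,500
print(f"{comb(n_tfs, 4):,}")  # 4-factor combinations: ~210 billion
```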
Powered by advances in single-cell sequencing and machine learning technologies, NewLimit is transforming a previously manual discipline into data-driven science. The company has a healthy division of labor between molecular biologists and computational biologists, laying the cultural foundation necessary to build an increasingly efficient closed-loop platform. Combining expertise and multimodal readouts (scRNA-Seq, scATAC-Seq, etc.), NewLimit aims to discover therapeutic remodelers to treat previously intractable diseases.
In each round of experiments, NewLimit uses machine learning to guide the design and analysis of its reprogramming screens.
In addition to its outstanding team, technical prowess, and ambitious vision, we also admire NewLimit's pragmatism. While the company has not publicly shared details of its initial business strategy, we believe its approach is creative, reasonably de-risked, and potentially transformative for humanity. The founding team understands that platform biotechs risk being seen as expensive science projects if they do not generate near-term assets. To that end, NewLimit has been transparent, cataloging its technological progress since its inception.
9. Poolside
Foundational Artificial Intelligence for Software Development
OpenAI focuses on general artificial intelligence, DeepMind focuses on scientific discovery, and the third fundamental use case of artificial intelligence is understanding and creating software.
GPT-4 is ingrained in the workflows of both experienced and novice developers. But this paradigm shift is still in its infancy. Extrapolating from the past few months, AI-assisted programming will soon become ubiquitous. As this trend develops further, natural language will become the abstract foundation upon which software is built.
Although other companies have released large pure-code models like StarCoder, no approach has yet come close to GPT-4's performance. I think this is because a model trained only on code cannot produce strong software development capabilities. That's how I came across Poolside, founded by Jason Warner, the former chief technology officer of GitHub, and Eiso Kant, the founder of source{d}, the world's first code-focused AI research company.
Poolside is unique in that they take the OpenAI base-model approach but focus on only one capability: code generation. Their technology strategy hinges on the fact that code can be executed, allowing for immediate, automatic feedback during the learning process. This enables reinforcement learning through code execution, a compelling alternative to reinforcement learning from human feedback (RLHF), and something Eiso began exploring as early as 2017.
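Here is a minimal, hypothetical sketch of what a reward signal from code execution can look like: candidate programs (hard-coded here in place of model samples) are run against tests, and the pass rate becomes the reward. This only illustrates the general technique, not Poolside's system.

```python
# Hypothetical sketch only: reward = fraction of tests a candidate program
# passes when executed. Candidates stand in for samples from a code model.
def reward(candidate_src: str, tests: list[tuple[int, int]]) -> float:
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)              # the code can actually be run
        fn = namespace["square"]
        passed = sum(fn(x) == expected for x, expected in tests)
        return passed / len(tests)
    except Exception:
        return 0.0                                  # broken code earns zero reward

tests = [(2, 4), (3, 9), (-1, 1)]
candidates = [
    "def square(x): return x * x",                  # correct
    "def square(x): return x + x",                  # wrong
    "def square(x): return x **",                   # does not even parse
]
for src in candidates:
    print(f"{reward(src, tests):.2f}  <-  {src}")
```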
10. Mistral
OpenAI competitors in France
Recently, Paris has been lit up by an explosion of projects in generative artificial intelligence. Why? My view is that Paris has the largest pool of world-class generative-AI talent outside OpenAI's event horizon. Of these projects, the boldest is undoubtedly Mistral. Mistral was founded by Guillaume Lample, Arthur Mensch, and Timothée Lacroix with a mission to build the best open-source language models and a thriving ecosystem around them.
I have known Guillaume for four years; we have both been deeply involved in applying large language models to mathematics, especially formal mathematics, and while working at OpenAI and Meta respectively we developed a friendly rivalry. Guillaume is one of the most talented researchers I have ever had the pleasure to work with, and I had the privilege of watching him go from research at Meta to founding Mistral. Along the way, I also met Arthur Mensch. I have always been impressed by his work, especially Chinchilla, which redefined what it means to train large language models efficiently, and RETRO, an approach to retrieval-enhanced language modeling that, I would argue, has still not been fully explored.
Now, let's dig into what makes Mistral Mistral. The startup’s vision is to build an ecosystem based on a best-in-class open source model. This ecosystem will serve as a launching pad for projects, teams, and companies, accelerating the pace of innovation and creative use of large language models.
Take reinforcement learning from human feedback (RLHF) as an example. Performing RLHF is typically time-consuming and therefore costly: it involves manually labeling AI outputs, which can require a great deal of work. The effort is only worthwhile if the underlying model is promising enough. For a large player like OpenAI, investing in this process makes sense, and the company has the resources to do it. But traditional open-source communities usually need a "leader" to step forward and take on this responsibility.
Mistral has the opportunity to do just that, investing in an open source model for RLHF. In doing so, Mistral will open the door to a Cambrian explosion of innovation. Open source developers will have access to clearly labeled models that they can adapt and customize for different needs. The ultimate winner will be the broader market, and we will have access to more specific and compelling use cases than one closed company could produce alone.
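For readers less familiar with what the "labeling" work above actually produces, here is a minimal, hypothetical example of the kind of preference comparison RLHF relies on. The record format is illustrative only; it is not a specific dataset or Mistral's pipeline.

```python
# Hypothetical sketch only: one human-labeled preference comparison of the
# kind collected at scale for RLHF. A reward model is trained on thousands of
# these before the language model is fine-tuned against it.
preference_data = [
    {
        "prompt": "Summarize this bug report in one sentence.",
        "chosen": "Login fails on Safari because the session cookie is never set.",
        "rejected": "There is a bug. It is about login. It happens sometimes.",
    },
    # ...many thousands more, each written or ranked by a person, which is
    # exactly where the labor cost discussed above comes from.
]
print(len(preference_data), "labeled comparison(s)")
```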
Whoever has the best open source model will attract more interest and value. I'm bullish on Mistral because the team is actively pushing the efficiency/performance frontier. At the same time, Mistral’s talent in this area is by far the best in the world.
11. Sereact
Smarter Industrial Robots
We often hear predictions that in the long term, artificial intelligence and robotics will augment or automate human tasks. Today, this has increasingly become an urgent business imperative.
By 2030, Europe's working-age population is expected to decrease by 13.5 million, and labor costs are rising at the fastest rate in more than 20 years. With the rise of e-commerce, warehouses are under more pressure than ever and it is becoming increasingly challenging for businesses to stay competitive.
55% of warehouse operating expenses come from order picking, yet the outlook is not rosy for companies looking to adopt automated systems. Neither the flashy applications we're familiar with from AI-led SaaS (software as a service) nor the plethora of open-source products we see in other parts of the ecosystem has yet made its way into robotics.
Instead, businesses looking to automate picking and packing are faced with choosing expensive, inflexible robotic solutions. They must navigate a host of proprietary interfaces that require significant programming time and expertise. These systems also struggle to cope with changing product mixes, require regular human intervention, and perform poorly when handling extreme situations.
Sereact solves these problems. Its software is built on powerful simulated environments that train robotic arms to understand the spatial and physical nuances of any potential real-world setting. Once deployed, the system keeps improving by continuously learning from real-world data. That also means it can handle traditionally hard-to-grasp items such as electronics, textiles, fruit, tiles, and wood.
Most excitingly, their robotics stack uses large language models to enable intuitive natural-language control of robots. They developed a transformer-based model called "PickGPT" that lets users give the robot instructions and feedback via voice or text. This way, anyone can ask the robot to perform a desired task, regardless of their level of technical knowledge.
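As a rough illustration of natural-language robot control, here is a tiny, hypothetical sketch that maps free-text instructions to a structured pick command. Both the parsing and the robot interface are invented and are not PickGPT or Sereact's actual stack.

```python
# Hypothetical sketch only: free text in, structured pick-and-place command out.
# A real system would use a language model for parsing and a robot SDK to act.
def parse_instruction(instruction: str) -> dict:
    text = instruction.lower()
    target = "textiles" if "shirt" in text else "electronics"
    destination = "bin A" if "bin a" in text else "bin B"
    return {"action": "pick_and_place", "target": target, "destination": destination}

def execute(command: dict) -> str:
    # Stand-in for the robot control layer.
    return f"Picking {command['target']} and placing them in {command['destination']}"

print(execute(parse_instruction("Put the shirts into bin A")))
```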
Sereact combines the expertise of its two co-founders. CEO Ralf Gulde has worked at the intersection of artificial intelligence and robotics, while CTO Marc Tusher specializes in deep learning. The pair conducted peer-reviewed research on these subjects at the University of Stuttgart, one of Germany's most prestigious universities for automation and industrial manufacturing.
Despite being a young company, Sereact has already attracted an impressive list of partners, including Daimler Truck, Schmalz, Zenfulfillment, Zimmer Group, and Material Bank. This points to a huge potential market opportunity in the picking and packing industry.
Beyond the obvious use cases in e-commerce warehouses, whether picking orders or unpacking boxes, there are a range of other applications. In traditional manufacturing, for example, there is a time-consuming step that involves laboriously gathering the small, delicate parts required for assembly. Robotic arms have historically struggled to grasp small parts and to single out individual parts in cluttered environments. Sereact's software can identify these parts and select the correct gripper to pick them out.
12. Lamini
Tailor-made large-scale language model engine
Now, every company is trying to integrate artificial intelligence into its business. The world's largest companies recognize the potential of artificial intelligence, with 20% of S&P 500 CEOs mentioning AI during their first-quarter earnings calls. Large language models can significantly improve business efficiency by accelerating core functions such as customer support, outbound sales, and coding. They can also improve core product experiences, answering customer questions with AI-based assistants or creating new generative AI workflows that delight customers.
Given that large companies tend to be slow to adopt new technologies, we were surprised at how quickly enterprises started building with AI. Not surprisingly, many businesses want to build their own AI models and solutions in-house. Every business has a proprietary trove of customer data, often part of its core business moat, and these businesses see risk in sending their most valuable data to foundation-model APIs or to startups whose reliability is uncertain. Even setting aside data privacy, public large language models such as GPT-4 or Claude are trained entirely on open data and therefore lack customization for enterprise-specific use cases and customer segments.
Some technology companies, such as Shopify and Canva, have formed internal "AI Tiger Teams" to use ready-made open source models to integrate artificial intelligence into all parts of the business. However, most companies do not have the resources or experienced AI researchers to build and deploy proprietary large-scale language models based on their own data. They realize that this wave of AI could be a transformational moment for the future of their business, but so far have not been able to leverage or control their own AI development.
That's why we're so excited about what Sharon Zhou, Greg Diamos, and their team are doing at Lamini. Lamini is a large language model engine that makes it easy for developers to quickly train, fine-tune, deploy, and improve their own models with human feedback. The tool provides an enjoyable development experience that abstracts away the complexities of AI models and, more importantly, allows enterprises to build AI solutions on top of their own data without having to hire AI researchers or risk data leakage. We first worked with Sharon and Greg last fall, and since then we've had the opportunity to support this technically proficient, customer-focused founding team as they pursue their ambitious vision to transform the way businesses adopt AI.
Specifically, deploying private large language models with Lamini offers a wide range of advantages over public solutions. Having an in-house engineering team handle the build ensures data privacy and allows for greater flexibility in model selection and across the entire compute and data stack. Models built with Lamini also hallucinate less, have lower latency, run more reliably, and cost less than off-the-shelf APIs. These performance gains come from core technical insights that the Lamini team builds into the product, drawing on decades of research and industry experience with AI models and GPU optimization.
13. Factory
Your Coding “Robot”
Today, if you want a computer to do something for you, you have to translate your thoughts into "computer language," code that a compiler can understand. To become an engineer, you have to train your brain to think like a machine. However, we are reaching a tipping point where AI can turn human language into code. The transition from human engineers to digital engineers is likely to become one of the most important technological inflection points of our lives.
We are still in the early stages of this transformation. Artificial intelligence tools like BabyAGI and AutoGPT have captured the public imagination. And while coding assistants like GitHub Copilot represent an improvement, they're still very limited, serving mostly as auto-completion for ideas already expressed in code.
Factory is different. The company was founded in 2023 by former string theorist Matan Grinberg and machine learning engineer Eno Reyes. When I met Matan, I was immediately drawn to his vision: a future where engineers can enjoy building things again by delegating tedious tasks and focusing on hard problems. To do this, Matan and Eno created autonomous coding "bots."
The bots are AI engineers that handle everyday tasks such as code review, debugging, and refactoring. Unlike existing products, Factory's bots don't need to be driven by you; they can independently review code, handle errors, and answer questions. You can also use the bots like junior developers, brainstorming with them and sharing out feature work. The bots have strong guardrails, and their intelligence is focused on the user's needs, making it difficult for them to "hallucinate" wrong answers.
Code generation will be one of the most transformative areas of the AI revolution, and Factory has all the necessary tools to succeed.
Consider the team: Matan, Factory's CEO, was a string theorist at Princeton University, where he studied black hole singularities. Eno worked as a machine learning engineer at Hugging Face and has handled tedious engineering processes firsthand. This is a unique team.
The story of human development is one of offloading repetitive tasks so we can move on to more complex ones. When humans invented agriculture, they essentially unlocked our ability to build cities. After the Industrial Revolution, we built rockets that took humans to the moon. The next generation is on a mission to liberate humans from online drudgery and push the technological frontier further.
Translator: Jane