Revenue and Pipeline for Private GenAI
National data regulation, the repatriation of data, and the return to the private cloud are combining with the need to stay competitive with GenAI to drive a strong pipeline for our Private GenAI business.
Since we launched our 1.0 in September, declaring our Private GenAI Stack production-ready, the commercial momentum has continued to build:
Diego Ciangottini, who works with CERN, figured out how to run Helix on SLURM to take advantage of thousands of GPUs.
Helix was deployed into one of the largest telcos in Europe, who are using it as a RAG, integration, and testing layer between their internal Llama 3 models and their business applications.
One of the largest banks in the Arabian Gulf reached out and told us that our mix of Private GenAI, OpenShift and NVIDIA support, plus integration with their business systems, was a perfect fit for their needs, and that what we built was radically simpler than the software stack NVIDIA are offering.
A contact who works with one of the largest telcos in East Asia reached out and told us that they have thousands of GPUs on SLURM that they’re trying to turn into a locally hosted OpenAI alternative, and that Helix is a great fit.
Meaningful market in private GenAI
All of these signals are telling us that there is a meaningful market for Private GenAI solutions in places where there are good reasons not to send your data to OpenAI or Anthropic.
Geographically, this suggests that we will find our market wherever there is regulation to stop data leaving the country or the continent.
Industry-wise, financial services, telcos and healthcare all have good reasons to keep their data truly private.
From a business-function perspective, we’re seeing interest from legal & compliance, engineering & DevOps, and HR.
Focus on simplicity
The other shift in our commercial strategy, which has happened naturally, is towards simplifying things that are otherwise complex.
As I argue in We can all be AI engineers, the tooling is now good enough that you don’t need a PhD in Machine Learning to put together GenAI that integrates with knowledge sources or business systems via APIs.
And with open source models, you can run it all locally.
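To make that concrete, here is a minimal sketch of what “integrating via APIs” looks like in practice: the standard OpenAI Python client pointed at a locally hosted, OpenAI-compatible endpoint. The base URL, API key, and model name below are placeholders for your own deployment, not Helix defaults:

    # Minimal sketch: talk to a locally hosted, OpenAI-compatible endpoint
    # with the standard OpenAI client. Base URL, key and model are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # your private endpoint
        api_key="EMPTY",                      # many local stacks accept any key
    )

    response = client.chat.completions.create(
        model="llama3:instruct",  # whichever open source model you serve
        messages=[{"role": "user", "content": "Summarise our returns policy."}],
    )
    print(response.choices[0].message.content)

Because the API surface is the same, the rest of your application code doesn’t care whether the model is a hyperscaler’s or one running on your own GPUs.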
Case studies
German natural language apps
We continue to see success with our lighthouse customer AWA, who are driving use cases around evals and CI/CD (described in more detail in the product post):
"At AWA, we're developing Helix Apps to help our clients drive revenue, such as scaling their sales teams with natural language interface to product catalogs and other APIs and knowledge sources. We've been pioneering evals (tests) for our Helix Apps for several months. The evals are essential to ensure that changes we make to the model, the prompting, the knowledge or the API integrations make things better and don't regress expected functionality. You wouldn't ship software without tests, and you shouldn't ship AI Apps without evals! We're glad that Helix finally added evals (CI) and deployment (CD) support to the product, they've been very fast in developing the features we needed!"
– Melike Cankaya, Software Engineer at AWA Network
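To make the idea concrete, here is a minimal sketch of an eval written as a plain Python script against an OpenAI-compatible endpoint. This illustrates the concept only; it is not the helix test format (see the docs linked in the resources below for that), and the endpoint, model name, and test cases are made up:

    # Minimal sketch of an eval: check that answers still contain expected
    # facts after a model, prompt or knowledge change. Illustrative only;
    # not the helix test format. Endpoint, model and cases are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

    EVAL_CASES = [
        # (question, substring the answer must contain)
        ("What is the warranty period for the X100?", "two years"),
        ("Which regions do you ship to?", "Europe"),
    ]

    failures = []
    for question, expected in EVAL_CASES:
        answer = client.chat.completions.create(
            model="llama3:instruct",
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content
        if expected.lower() not in answer.lower():
            failures.append((question, expected, answer))

    print(f"{len(EVAL_CASES) - len(failures)}/{len(EVAL_CASES)} evals passed")
    for question, expected, answer in failures:
        print(f"FAIL: {question!r} expected {expected!r}, got: {answer!r}")

Run in CI, a script like this fails the build whenever a change regresses an expected answer, which is exactly the workflow AWA describes: you wouldn’t ship software without tests, and you shouldn’t ship AI Apps without evals.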
Marketing survey data
Another customer of ours, Matt Gibbs at Bayes Price, spoke at the AI for the Rest of Us conference:
“We have been working with a company called Helix. They are a bootstrapped company enabling companies like mine to get open source models into our own stack – our own Azure space. They’ve given us technology to apply a large language model to analyze survey data in a safe space.”
– Matt Gibbs, CEO at Bayes Price
Check out the talk from 16:00 onwards for a demo of the Helix App Matt is building with us to revolutionize how insights are extracted from marketing survey data!
Mega-trends: repatriation of data at national and corporate levels
So we have found ourselves surfing two mega-trends.
Firstly, countries are introducing national and regional regulation that controls end-user data leaving the country or the continent. This repatriation of data will push more organizations to run GenAI in-country, and that will fuel demand for solutions that replicate what OpenAI can do on your own infrastructure. Our goal is to be well-positioned to benefit from that, just like our friends at Oxide Computer.
Secondly, there’s the shift back to private cloud for enterprises.
Michael Dell is biased, of course, but this repatriation of infrastructure in enterprises will also drive the need for local solutions. Because “cloud” now includes OpenAI-compatible chat completions, fine-tuning APIs, and the DevOps tooling to manage and deploy your GenAI applications, “private cloud” will need to supply the same. And that’s where running Helix on your K8s cluster comes in.
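One consequence of that API compatibility is that an existing application written against the OpenAI client shouldn’t need code changes to move to a private endpoint; repointing it via environment variables is enough. A sketch, with a made-up in-cluster hostname:

    # Sketch: repoint an existing OpenAI-client application at a private,
    # in-cluster endpoint via environment variables; no code changes needed.
    # The hostname below is a placeholder for your own deployment.
    import os

    os.environ["OPENAI_BASE_URL"] = "http://helix.internal:8080/v1"  # placeholder
    os.environ["OPENAI_API_KEY"] = "EMPTY"

    from openai import OpenAI

    client = OpenAI()  # picks up the base URL and key from the environment
    # ...the rest of the application runs unchanged against the private endpoint.

In practice you would set those variables in your deployment config rather than in code, so the same application image runs against either a hyperscaler or your own cluster.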
We are playing a different game
I’ve thought about this for a while: we’re playing a totally different game to the likes of OpenAI and Anthropic. There’s no way that we’re ever going to train our own LLM; we don’t have hundreds of millions of dollars to spend on GPUs.
Therefore, we’re not interested in gathering as much training data as we can. Helix doesn’t want your data. Open source GenAI is good enough that we don’t need it.
Instead, our aspiration is simply to provide the best software you can run on your own infrastructure, making it easy to get all the benefits of GenAI without sacrificing control, security & availability to a hyperscaler.
Give it a try; we really did make it as easy as:
curl -sL -O https://get.helix.ml/install.sh && bash install.sh
Want to talk? Email us: founders@helix.ml
Resources
Docs for helix test in Helix 1.4 release: github.com/helixml/helix/blob/main/examples/test
Luke’s AI for the Rest of Us talk introducing and live demoing helix test
CI/CD reference architecture and complete demo: https://github.com/helixml/genai-cicd-ref
We’d love your feedback: Join us on Discord!