Every few months, some question starts coming up over and over in conversation. If it's something I know a little about I'll contribute, but more often than not I try to listen and understand the patterns and examples - what's worked, what hasn't.
Eventually I find myself writing one of these - about model selection, using connections in RAG systems, defensibility as an AI company, prompting or product management.
Today it's about hiring in AI. In the last two weeks I've had over twenty conversations - ranging from companies working directly in this wave of AI and companies building AI-enabled products, to more traditional firms that want to improve their operations and see what could change.
The problems are always the same:
These are worthwhile problems to solve. Of course in every cycle, there is a point at which hype outpaces value. We might already be there, but I'm now firmly of the opinion that this AI summer holds quite a bit of value under the hype.
Modern LLMs (and LLM-powered tools) have been the single greatest force multiplier I've seen in my time, with the widest application surface, rivalled perhaps only by spreadsheets and code hints. Every single company I know could use at least some knowledge of and expertise in modern AI - either at the tool-usage level or the engineering level.
Before we get into it, I want to point out a few caveats:
If this helped, consider following me on Twitter. I'm not much for email captures and newsletters, and I usually post shorter things and new articles there.
This has become a tough question because of a few facts. Some of these took me a while to really get used to.
Once you get used to them, you can start to form a few guidelines for what makes a reasonable applicant. Let's see if we can't use a points system here.
If they haven't built something involving AI before, -100 points from Gryffindor.
If they've built something simple, +10 points.
If they've built something that they (or someone they know) are a daily active user of, +100 points.
(I want to reiterate that this guide is entirely about hiring for AI engineering positions or teams. Please don't judge sysadmins or petrochemical engineers by these metrics.)
If they know the basics of string search, string distance, etc., +10 points.
If they can spin up elastic or do some basic BM25, +50 additional points.
If they have a working understanding of embeddings and what they should not be used for, +50 points (there's a small retrieval sketch after this list).
If they've used LMStudio or ollama to run a model from Huggingface, +10 points.
If they can outline idiosyncrasies between models, +20 additional.
If they want to talk about model architectures and how they're different, +20 additional.
If they know about quantization and how to pick a model size, +20 additional points! (There's a rough sizing sketch below.)
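To make the retrieval items above concrete, here's a minimal sketch of what I'd count as "basic BM25" plus "a working understanding of embeddings". It leans on the rank_bm25 and sentence-transformers packages purely for illustration - any equivalent setup is fine.

```python
# A toy comparison of lexical (BM25) and embedding-based retrieval.
# Assumes: pip install rank_bm25 sentence-transformers
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "How to reset a forgotten account password",
    "Quarterly revenue report for the sales team",
    "Steps to rotate API keys after a security incident",
]
query = "how do I reset my password"

# BM25 works on token overlap - simple, fast, and surprisingly strong.
tokenized = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized)
bm25_scores = bm25.get_scores(query.lower().split())

# Embeddings capture meaning, but they are not a database, an exact-match
# lookup, or a substitute for filters - hence the "what they should not be
# used for" point above.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
cos_scores = util.cos_sim(query_emb, doc_emb)[0]

for doc, b, c in zip(corpus, bm25_scores, cos_scores):
    print(f"bm25={b:.2f}  cosine={float(c):.2f}  {doc}")
```

A candidate who can explain why the two methods rank these documents differently - and where each one falls over - is exactly who the +50s above are pointing at.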
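And for the last few items, a back-of-the-envelope sketch of what "picking a model size" looks like, plus a call to a locally running ollama server. The bytes-per-weight numbers are rough rules of thumb, and the model name is just an example, not a recommendation.

```python
# Rough sizing: a quantized model needs roughly (params * bytes-per-weight)
# of memory, plus headroom for the KV cache and activations.
import requests  # assumes an ollama server on the default port (11434)

BYTES_PER_WEIGHT = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}  # approximate

def fits_in_memory(params_billions: float, quant: str, mem_gb: float,
                   overhead: float = 1.3) -> bool:
    """Back-of-the-envelope check: does a model of this size/quant fit?"""
    needed_gb = params_billions * BYTES_PER_WEIGHT[quant] * overhead
    return needed_gb <= mem_gb

# e.g. an 8B model at 4-bit quantization on a 16 GB machine:
print(fits_in_memory(8, "q4", 16))   # True  (~5.2 GB needed)
print(fits_in_memory(70, "q4", 16))  # False (~45 GB needed)

# Once a model is pulled (`ollama pull llama3`), the local REST API is enough
# to check that someone can actually drive it:
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3",
          "prompt": "Explain BM25 in one sentence.",
          "stream": False},
)
print(resp.json()["response"])
```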
The best AI engineers I've found have not always been engineers. Some have been CEOs, some product people, some had only been learning to code for two months. What has been crazy is that the share of good ideas I've seen, heard, and implemented has not leaned heavily towards those with the most engineering experience.
The previous section should hopefully serve as a guide to thin out your pipeline. Once you're past that point - and you'll notice almost everything in there can be done in a few days as a candidate if you're willing to spend the time - there isn't a replacement for real projects.
If a candidate has real projects under their belt, you should absolutely be using those instead of what I'm suggesting below. However, I've encountered many engineers without public projects who've gone on to build some truly amazing things. In that case, take-homes are a good idea - provided you can limit the time to a few hours.
Here's what I recommend. Assemble a pipeline of 3-6 projects, none of which should take more than a week - I'll provide some examples below. Line them up by difficulty. Place one of them in the hiring pipeline, and the rest in the probation/onboarding side of things.
The project in the hiring pipeline should ideally be time-limited to a few hours. My template has been to suggest starting with an architectural overview and an early proof of concept, then stopping at the time limit and judging how far we've gotten.
For the projects on the onboarding side of things, what's interesting is that you can build a library of past work on the same problem. Each time someone completes one, you can look at the approaches everyone else took, compare and contrast, and discuss. Ideally the projects encourage learning and evaluation from both sides of the table - and it's a bonus if the result is something useful to the individual or the firm.
If you set it up right, you have a chance to create an atmosphere of continuous learning. Even the same project, repeated every month for the last year, would have arrived at completely different methods and solutions each time - I know.
Here are some projects/take-homes you're free to use. These are also great if you're looking for projects to build as a way to learn AI engineering:
What's also really interesting is how much the floor for development has shifted in the past year. Any of these (especially 3.) would have taken me more than a week to get to a proof of concept; today, just pasting in this blog post and asking for an answer got me a working prototype from 3.5 Sonnet in a single shot!
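If you want to reproduce that experiment yourself, a few lines against Anthropic's Python SDK are enough - something like the sketch below, where the file name and prompt wording are placeholders, not a prescription.

```python
# Reproducing the "paste the post in and ask" experiment via the API.
# Assumes: pip install anthropic, with ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()

with open("this_blog_post.md") as f:  # placeholder: the article text
    post = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": post + "\n\nBuild a working prototype of project 3 above.",
    }],
)
print(message.content[0].text)
```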
My favorite tasks - like my favorite video games - are easy to do and hard to master. Of course, you can skip the take-home in the pipeline if they have enough projects under their belt - we're simply looking for how comfortable someone is with the business end of a modern AI model.
The one thing that's helped me the most is writing. The more you can talk about yourself, your firm and what you want to be doing, the easier it becomes to attract an almost passive incoming stream of good people who agree or feel the same way.
If you manage a team pointed into, near, or at AI, the biggest worry you'll have is churn. The floor has reset upwards, especially with the influx of capital, however long that lasts. It often seems that a single engineer with the right tools and the flexibility to learn can get to $1M ARR in a year on their own.
My best path to countering this has been to build an environment of learning and growth. The one massive negative of a tiny company of one (or two) is that, with the right culture, many minds can learn faster than one. If you can make yourself and your team feel like you can all move forward faster together than any of you could alone, you might just stop a leak.
Hope that helps!
This guide is the product of a set of conversations and lived experiences around shared concerns, but it's still one man's opinion. People are the most diverse thing we have, which means anecdotal advice is rarely exactly right. If you have thoughts/disagreements/other concerns, I'd love to know!