
The Great AI Science Acceleration

In my last post of 2025 I wrote about my plans for the new year and said I would go all-in on AI. That’s what I did, and last week I shared that Sean and I had started the biotech company Helixion Therapeutics and built a model to rapidly generate personalised cancer vaccines. If you had told me three months ago that we would have a working cancer vaccine model by the end of March, I would have said that’s impossible. Well, it turns out it is very possible. It’s not perfect yet, for sure, but it’s unbelievable how fast AI has developed over just the past couple of weeks.

I have a degree in economics, so I have no medical or scientific background (although I read a LOT of scientific articles), and I’m certainly not a PhD. Despite that, Sean and I managed to build this in an incredibly short time without spending large amounts of money. That alone should tell you something about where we are with AI and science.

The power of cross-pollination

Over the past couple of months, and especially the past couple of weeks while working on the Helixion model, my mind has been flooded with new ideas related to AI, genetics, medicine, and DNA. We’re focused on personalised cancer vaccines right now, but I believe we can eventually make this much broader (Sean will try to stop me when he reads this; he is much better at focusing, and he is right of course, but I like to dream).

Drug repurposing, for example, is low-hanging fruit. You can build AI models that look at the existing literature (and not just the Western literature, as most people do) and come up with drug applications that people haven’t considered. It’s really not that hard if you understand how AI works and know what to look for in the literature. This is not a new idea; I know others are working on it as well, but I’m not sure whether they are AI-first or taking the same approach.

Here’s what I think is happening: a PhD goes incredibly deep into one field. That depth is valuable, but AI changes the equation. You no longer need to go as deep into a single field. It’s now much better to go fairly deep into several fields and then find the overlaps. That’s where the novel ideas live and where new science will be created. AI can help you with that, because AI is a PhD in every field and can explain it to you like you’re 5 years old (try it, it’s fun if you prompt it to do that sometimes).

I didn’t know a thing about biology, genetics, cancer vaccines, proteins, peptides, or neoantigens a couple of months ago. I didn’t know how mRNA vaccines work (except for the basics) or how you create them. I’m certainly no expert yet and won’t become one, but I now know enough to come up with new ideas. Maybe even faster than some PhDs, because I connect dots across fields instead of digging deeper into one.

I’ve done this before: the FBC Bitcoin Trust (which became a publicly listed Canadian Bitcoin ETF years before the first US one was listed) came from combining knowledge of traditional finance and Bitcoin back in 2017, when nobody was thinking about it. Bitcoin mining was similar: two different fields, which allowed us to raise money for Bitcoin mining in public markets with Hut 8. This meant we could keep (‘HODL’) our mined Bitcoin instead of selling it to pay for services and power. At the time that was revolutionary. AI makes this kind of cross-pollination dramatically easier and faster.

AI as the ultimate research partner

One reason we were successful with our cancer vaccine model is that AI helped us find the right datasets fast. Some were license-only or not for commercial use, so we asked AI to find us free alternatives (we are financing this ourselves for now). If they were available, AI would eventually find them, reformat the data, and get it into our model. These things simply weren’t possible even a few months ago.

AI finds every paper you’re looking for and explains or summarises those papers for you. It’s patient, always on, always enthusiastic. You give it a huge task and it’s like an eager intern who genuinely wants to do the work (“Yes, good idea!”). I make mistakes constantly, and I ask AI to explain basic things to me over and over. It doesn’t care; it just does it, again and again, without judging me.

I have four different AI instances open on my laptop at all times (ChatGPT, Claude, Gemini, Grok) and I go from one to the other, testing what one tells me by discussing it with another. I have preferences for certain tasks and I know each model’s strengths and weaknesses pretty well by now.

What universities should understand (or learn)

This brings me to a harder point: AI makes me realise how slow science is. Scientists are great people, don’t get me wrong, but they’re not entrepreneurs. Their speed is different from ours. In medicine, the traditional path of Phase 1, 2, and 3 trials is a real bottleneck. We need to come up with alternatives, and maybe AI can help us get there.

I was recently looking at my master’s thesis, which I wrote full-time over a couple of months. I believe that anybody with good knowledge of Claude Cowork could now do the whole process (defining a problem, collecting the data, building spreadsheets and models, running statistical tests, drawing conclusions, and then writing 40 pages full of graphs) in a weekend, maybe even in one day. Claude comes up with ideas, finds the data for you (that took me weeks in 1995), formats it, runs regressions on it, and draws conclusions. I don’t know if universities realise it, but I can’t imagine that smart, AI-native students would spend more than a few days writing a full master’s thesis.
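To give a feel for how small the statistical core of such a project really is, here is a deliberately tiny ordinary-least-squares fit in plain Python. The data and the function name are synthetic stand-ins for illustration, not anything from the actual thesis:

```python
# Closed-form simple linear regression: fit y = a + b*x to paired data.
# Pure Python, no libraries -- the kind of step an AI assistant now
# automates as part of a larger data-analysis pipeline.
def ols_slope_intercept(xs, ys):
    """Return (intercept, slope) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Synthetic example: the data lie exactly on y = 2 + 3x,
# so OLS recovers intercept 2 and slope 3.
xs = [1, 2, 3, 4, 5]
ys = [2 + 3 * x for x in xs]
a, b = ols_slope_intercept(xs, ys)
print(round(a, 6), round(b, 6))  # 2.0 3.0
```

In a real thesis the regressors, controls, and diagnostics are what take the time; the point is that the mechanical parts of that pipeline are now trivially automatable.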

If you take that a step further, I think one good AI-first PhD can now do the work of 10, maybe even 20 PhDs who just use AI as a more expensive version of Google. That’s a huge opportunity if you’re a PhD or researcher, but it means PhDs need to embrace AI fully, right now. If they don’t spend serious time learning what it can do, they will be left behind by people who do understand AI and can work with it.

If I ran a university right now, I would completely change its curriculum. Become AI-first before AI takes over the world right under your nose. This doesn’t take much time, because you can do it with AI as well! It’s actually an easy and fun exercise: feed the syllabus and the required reading into Claude and let it design a 6- to 9-week course for each subject fully built around AI, or, as a next step, even a personalised course for every student. Maybe even more important: force your PhDs and professors to become AI-first, and literally check every week how many tokens they use. I can guarantee you that the number of publications would skyrocket.

The acceleration math

2025 was the year of AI coding and vibe coding: experienced developers coding together with AI, and noobs like me building apps they couldn’t have created just a few months earlier. The best coders don’t code anymore; they just give AI the direction and check the output.

2026 is the year of agentic AI: multiple AI agents running tasks for you simultaneously while handing tasks to subagents. As an example, you could literally ask an AI agent to redesign university courses and make them AI-first, give it all the information, and the next morning you have a new curriculum. If you haven’t tried agents yet, do it. I can guarantee you it’s life-changing; you will suddenly understand where this is all going.

Here’s how I see the acceleration: this year we will get scientific breakthroughs that would normally take five years. Next year that doubles: we’ll achieve in one year what would otherwise take ten. By the end of 2027 we could be where we would normally have been in 2040. That’s how fast this is going.
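To make that arithmetic explicit, here is a toy sketch of the compounding claim. It assumes (my framing of the numbers above, not a forecast) a baseline of 2025-level science, five years of progress delivered in the first accelerated year, and a rate that doubles each calendar year; the function name `equivalent_year` is made up for illustration:

```python
# Toy model of compounding acceleration: one calendar year delivers
# `rate` years of normal-pace scientific progress, and `rate` doubles
# every year (5 in the first year, 10 in the second, and so on).
def equivalent_year(start_year: int, calendar_years: int, initial_rate: int = 5) -> int:
    """Return the normal-pace year reached after `calendar_years` of accelerated progress."""
    equivalent = start_year
    rate = initial_rate
    for _ in range(calendar_years):
        equivalent += rate
        rate *= 2
    return equivalent

print(equivalent_year(2025, 1))  # after one year:  2030-level science
print(equivalent_year(2025, 2))  # after two years: 2040-level science
```

Whether the doubling assumption holds is of course the whole question; the sketch just shows that if it does, the 2040 figure follows from two accelerated years.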

Coming back to medicine, I strongly believe that within a few years we can solve most diseases. I’m now starting to think we will be able to turn aging around, so we will actually age backwards, which means we could eventually live far longer than anyone currently imagines. The science is pointing in that direction, and AI is the accelerant.

The double-edged sword

But it’s not all bright, because most people will lose their jobs in the coming years. For them, this will be terrifying. The polarization we already see in the world will only get worse with AI. We may face serious problems (economic disruption, social unrest) that are genuinely hard to solve.

Unfortunately, our governments still live in the old world; they don’t see the tsunami that will hit us. The old science world (universities, research centers, regulatory bodies) still wants to use frameworks designed for a much slower pace of discovery. That needs to change if we want to capture the full potential of what AI makes possible.

Where this is going

The future is scary but will eventually be bright, unless AI kills us (which is a real possibility, but that’s for another time). I am convinced that science will see a great acceleration starting this year, and it’s only speeding up. Within two years we’ll be at a level where we would normally be by 2040. AI companies will soon (this year?) release specialised versions for law, accounting, and medicine: tools that don’t just assist but actually replace entire functions with full accountability. I now realise that these AI companies will become the biggest companies in the world before the end of the decade; they will simply take over the world.

What we are seeing now is just the beginning, things will only go faster from here. Don’t get left behind.
