some of my thoughts and notes

From Chain-of-Thought Prompting to Model Chaining

We are all pretty used to the results of Chain-of-Thought reasoning at this point, without even having learned how to prompt models to do so, because all the frontier models engage in it automatically.

But what surprised even me yesterday evening was the intense untapped power of Model Chaining. Let me explain:

I had a long chat with Claude about how we might tackle our manufactured mental health crisis. We were brainstorming a software solution that would cost at least €500,000 in clinical trials alone, and I thought:

The best case is that someone has already done this, so that I don't have to.
The second best case is that someone is working on it, and I can simply help them.
The worst case is that nobody has started working on it yet and I need to find out why.

And probably I would find out that it's harder than I thought and decide not to start at all.

Instead of doing the research myself over the course of two weeks, as Claude had suggested, I asked it to write a research prompt for Perplexity. I copy-pasted that prompt into Perplexity, and three minutes later it returned a comprehensive summary that answered my questions in plenty of detail.
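If you wanted to automate that two-step chain instead of copy-pasting, a minimal sketch might look like the one below. It assumes the official `anthropic` Python SDK and Perplexity's OpenAI-compatible chat completions endpoint; the model names, the example research question, and the `ANTHROPIC_API_KEY` / `PERPLEXITY_API_KEY` environment variables are assumptions you would adjust to your own setup.

```python
# Sketch of the Claude -> Perplexity chain described above.
# Assumptions: the `anthropic` SDK is installed, Perplexity exposes an
# OpenAI-compatible /chat/completions endpoint, and model names may differ.
import os
import requests
import anthropic

claude = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Step 1: ask Claude to turn the brainstorming topic into a research prompt.
plan = claude.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Write a detailed research prompt for a web-search model: "
                   "who is already building clinically validated mental health "
                   "software, and what did their trials cost?",
    }],
)
research_prompt = plan.content[0].text

# Step 2: hand that prompt to Perplexity, which does the actual web research.
response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar-pro",  # assumed model name
        "messages": [{"role": "user", "content": research_prompt}],
    },
    timeout=300,
)
print(response.json()["choices"][0]["message"]["content"])
```

The point of the split is that each model does what it is best at: Claude shapes the question, Perplexity does the web research.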

Aware of that superpower, I immediately went on to configure Claude Desktop with a Perplexity API key, so it can now query Perplexity directly.

In practice, though, it hasn't proven more useful than simply copy-pasting between Claude and Perplexity, because the direct integration introduces challenges of its own.

What has also proven useful is to talk with Claude about a project for Lovable.dev, and then copy-paste a prompt from Claude into Lovable. The results of a single prompt can be quite staggering this way.