Exploring Stardock's Clairvoyance technology
[Image: A man adjusts settings on a futuristic interface labeled "Analyzing" in a cozy, modern workspace.]
I have been trying out a new app from Stardock called Clairvoyance. It is a powerful AI interface that lets you use your own API keys to interact with large language models and do all kinds of things. It takes a privacy-first approach and is a bit more user friendly than some other tools I have tried. It has been very compelling and interesting to use, and I honestly cannot believe it is free. I hope it stays that way.

That said, there are still some bugs and rough edges. For example, it does not always make clear whether it is actively working or whether something has hung. When subagents get stuck, there does not seem to be an obvious way to stop or cancel the task, at least none that I have found so far. I have also seen cases where the interface says it is finished but still reports that it is working. On occasion I have asked it what it is doing, and the response mentions something running in the background without really answering the question. It almost feels like it is ignoring me and returning a vague or nonsense response instead of explaining what is happening. So there are definitely some areas that still need to be fleshed out. To be fair, the software is clearly marked as alpha. When it is on target and working, especially when it is helping with coding, it actually works quite well.
What makes it interesting is that it is not just another chatbot interface. The goal is to give people a way to orchestrate multiple AI models and agents from one place. From what I can tell, Clairvoyance makes working with AI tools easier by putting everything into a single desktop interface.
Some of the things Clairvoyance focuses on include:
- Connecting to multiple AI providers using your own API keys
- Supporting different large language models instead of locking you into one system
- Creating agents and subagents that can perform tasks and report results
- Allowing tasks to be broken down and delegated between agents
- Providing a desktop interface instead of relying on multiple browser tabs
- Taking a privacy-first approach so users can control which models and services they connect to
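Clairvoyance's internals are not something I have seen, so this is just a sketch of the general "bring your own API key" pattern the list above describes: one OpenAI-compatible chat payload that you can point at whichever provider endpoint you choose. The provider URLs, model name, and environment variable here are illustrative assumptions, not anything taken from the app itself.

```python
# Sketch of a provider-agnostic, bring-your-own-key chat request.
# Endpoints and model names below are illustrative assumptions.
import json
import os

# OpenAI-compatible chat endpoints; a local server (e.g. an Ollama-style
# gateway) can stand in for a cloud provider.
PROVIDERS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "local": "http://localhost:11434/v1/chat/completions",
}

def build_chat_request(provider: str, model: str, prompt: str) -> dict:
    """Assemble (but do not send) a chat request for the chosen provider."""
    return {
        "url": PROVIDERS[provider],
        "headers": {
            # The key stays on your machine and goes out only to the
            # provider you picked -- the privacy-first idea in a nutshell.
            "Authorization": f"Bearer {os.environ.get('LLM_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("local", "llama3", "Summarize this post.")
print(req["url"])
```

Swapping providers is then just a matter of changing one dictionary entry and one key, which is roughly the flexibility the feature list promises.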
I find it funny that the most effective way to organize LLMs, and the people working with them, mirrors a corporate structure. AI tools have really embraced the organizational chart. Think about it. I have seen several people I follow who build or use coding tools take this approach, including projects around Nostr such as work shared by William Casarin, also known as jb55 (jb55.com); the Nostr community is embracing and experimenting with tools like this. The idea is that you recreate a functioning workplace. You are the boss. Then you have supervisors or agents. Those agents can have their own subagents or employees. They assign tasks, check the work, and then present the results back to you. It is basically the corporate structure recreated in software. Isn't it hilarious that even AI ends up with an org chart?
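The org-chart pattern above can be sketched as a toy in a few lines: a "boss" delegates a task to supervisor agents, supervisors fan it out to subagent workers, and checked results flow back up the chain. The names and structure here are purely illustrative, not Clairvoyance's actual implementation.

```python
# Toy org-chart delegation: supervisors fan tasks out, workers do them,
# and reports bubble back up. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    subagents: list = field(default_factory=list)

    def handle(self, task: str) -> str:
        if self.subagents:
            # Supervisor: delegate to each subagent and collect the reports.
            reports = [sub.handle(task) for sub in self.subagents]
            return f"{self.name} checked: [" + "; ".join(reports) + "]"
        # Worker: actually "do" the task.
        return f"{self.name} did '{task}'"

boss = Agent("boss", subagents=[
    Agent("supervisor", subagents=[Agent("worker-1"), Agent("worker-2")]),
])
result = boss.handle("write tests")
print(result)
# -> boss checked: [supervisor checked: [worker-1 did 'write tests'; worker-2 did 'write tests']]
```

Add more supervisors or deepen the tree and you have, structurally, the same workplace hierarchy the paragraph above describes.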
I do wonder how long it will be before we see the slacker AI agent or the eager beaver employee agent. Say what you will, but in my lifetime I have often felt like machines sometimes have a life of their own. Even when they are manufactured the same way, certain machines develop little quirks. If you treat them right, or simply let them do what they were designed to do, they tend to work the way they should.
This whole AI scene is exploding. I am reading more and more posts from people who are using these tools, people who refuse to use them, people who hate them for what they represent, and people who say they are making the climate crisis worse. My view is that the situation is more complicated than that. Unprovoked war dwarfs the perceived climate effects of AI datacenters. Why worry about emissions from datacenters when we have real, visible disasters that are deliberately, humanly destructive? But I digress.
I recently read a compelling blog post by Tom Casavant called Musings on AI. If you do not follow him in your RSS reader or on social media, you probably should. His views on AI are similar to mine, which is to say neutral. He recently dove into Meshtastic with "Mastastic," a cool offline Mastodon client over mesh networks (1-mile range in tests). I have been eyeing Meshtastic and Meshcore myself for a hobby project. He also appears to be doing the internet a service by responsibly reporting issues he discovers on websites so they can be fixed in an ethical way.
I like hearing different views and takes on AI. I read and listen to people who fawn over it, like Leo Laporte, William Casarin, and Vitor Pamplona; more neutral voices like Tom Casavant, Paul Thurrott, and Robert Campbell; and Ed Zitron, who argues that AI is not a great business and an implosion is coming. This gives me a well-rounded view of AI and shapes what I do and how I use it. AI is not human; it is a tool, a very useful and democratizing one with real limitations.

This post has chased a few rabbits, I know. I just wanted to tell you about Clairvoyance, share my thoughts on it and on AI in general while using this tool, and explain how other people's views on AI shape my own views and usage. AI is history in the making. We are getting closer to my vision of a Star Trek style computer, and with tools like Clairvoyance paving the way on the front end and taking the technicality out of AI, we will be there in the not too distant future. Then who knows: replicators, a holodeck, or maybe both.
Links may be shortened via mtribe.link for cleaner formatting. All links redirect to their original destinations.