Raven — My Self‑Hosted Media Server AI

OC is not banned by anthropic if you use it with API (super expensive though). But to answer your question directly, yes, it does work with OpenAI, and not only OpenAI. It works with at least 15 more AI providers out there, including KIMI 2.5, which is the best next thing after OpenAI.
Any local. LLM models that oc can be redirected to?
 
Effectively banned for hobbyists like us.
They want to make sure their discounted (subscription) token capacity is available for the products which is part of their growth plan and business priorities. I always knew this would happen. Why would they subsidise something which is of no use to them. To be fair they did two things - [1] They provided free credits (equal to your subscription value) which you can use for a month before you decide how you are going to proceed. [2] Allow API access which means people who are using it for any critical use can continue using them though at a higher cost.

Regards,
Arun
 
Any local. LLM models that oc can be redirected to?

It can. But the official documentation says this:

"Local is doable, but OpenClaw expects large context + strong defenses against prompt injection. Small cards truncate context and leak safety. Aim high: ≥2 maxed-out Mac Studios or equivalent GPU rig (~$30k+). A single 24 GB GPU works only for lighter prompts with higher latency. Use the largest / full-size model variant you can run; aggressively quantized or “small” checkpoints raise prompt-injection risk (see Security)."

If you have $30k to spend on this, you can pay for API access to Opus itself :D

I do not have experience with OpenClaw + Local models. But have lots of experience with LLMs. They are the core of my startup. I did try OpenClaw and I think it is amazing. But I am not using it till the security issues associated with it sorted out. Its probably okay for what the OP is doing, but for me the power of OpenClaw comes with its ability to do my chores like managing email, calendar, customer support, market research etc. I am not going to give it access to these high risk ops for now, let alone my passwords and credit card details.

Even the tiniest local LLM runs at an excruciatingly low speed on CPUs. You need GPUs for it to be effective for any process which is not interactive. OpenClaw is interactive and not a batch process mostly. GPUs are expensive. Also the capability of small open source LLMs (which can run on consumer grade GPUs at home) are ridiculously low compared to the frontier models provided by Anthropic/OpenAI/Google. Local models can and will work. Its just that the user should be willing to accept the degraded performance both functionally and speed wise. And most importantly small local LLMs do not have any kind of guard rails against prompt injection. This *is* a huge security risk if you have given OpenClaw substantial access to your personal accounts.

There are some serious efforts to make OpenClaw like systems run in a secure way. We need to wait a few months. The scenario will be very different. But till then I am going to wait and watch from the sidelines.

One of the efforts is from the Nvidia : NemoClaw : https://www.nvidia.com/en-us/ai/nemoclaw/

Regards,
Arun
 
Wharfedale Linton Heritage Speakers in Walnut finish at a Special Offer Price. BUY now before the price increase.
Back
Top