It's a single HTML file - Claude wrote it and I iterated on the layout. A daily cron job checks the changelog and updates the sheet automatically, tagging new features with a "NEW" badge.
Auto-detects Mac/Windows for the right shortcuts. Shows current Claude Code version and a dismissable changelog of recent changes at the top.
It will always be lightweight, free, no signup required: https://cc.storyfox.cz
Ctrl+P to print. Works on mobile too.
There’s something funny about this statement in the description of a keybind cheat sheet: I can’t seem to find Ctrl on my phone, and I think it may be Cmd+P on a Mac.
> Ctrl-F "h"
> 0 results found
Interesting set of shortcuts and slash commands.
The fact that users put up with almost everything being objectively not very good, if not outright bad, is a testament to people adapting to bad circumstances more than anything.
On Mac it's the same as on Windows: Ctrl+V. Cmd+V is what pastes text.
After 43 iterations, it can turn any website using any transport (WebSocket, GraphQL, gRPC-Web, SSE, JSON API (XHR), Encoded API (base64, protobuf, msgpack, binary), Embedded JSON, SSR, HLS/Media, Hybrid) into a typed JSON API in about 10 - 30 minutes.
Next I'm going to set it loose on a 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies.
Claude Code will be the first to AGI.
I use TimescaleDB, which is fast with the compression. People say there are better options, but I don’t think I can fit another year of data on my disk drive either way.
Where'd you get the data itself? You sense, I suppose, everyone's skepticism here.
I don't understand your question? Are you saying the source of the data I linked to is corrupt or lies? Should I be concerned they are selling me false data?
But they are in fact selling the actual data! https://massive.com/pricing
Options quotes alone for US equities (or things that trade as such, like ADSs/ADRs) represent 40 Gbit per second during options trading hours. There are more than 60 million trades (not quotes, only trades) per day. As the stock market is open approximately 250 days per year (a bit more), that's more than 60 billion actual options trades in 4 years. If we're talking about quotes for options, you can add several orders of magnitude to these numbers.
And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?
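A quick back-of-the-envelope check, using only the figures quoted above, makes the skepticism concrete:

```python
# Sanity check of the numbers above: 60M options trades/day,
# ~250 trading days/year, 4 years, squeezed into 263 GB.
trades_per_day = 60_000_000
trading_days_per_year = 250
years = 4

total_trades = trades_per_day * trading_days_per_year * years
disk_bytes = 263 * 10**9

bytes_per_trade = disk_bytes / total_trades
print(f"{total_trades:,} trades -> {bytes_per_trade:.2f} bytes per trade")
```

That is under 5 bytes per options trade before a single stock quote is stored, which is the crux of the question.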
I think this would be pretty straightforward for Parquet with ZSTD compression and some smart ordering/partitioning strategies.
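The ordering part matters as much as the codec. A minimal stdlib sketch (zlib stands in for ZSTD here, and the tick data is made up) shows why sorting before compressing helps:

```python
import json
import random
import zlib

random.seed(0)
symbols = ["AAPL", "MSFT", "GOOG", "AMZN"]
# Hypothetical (symbol, price) ticks in arrival order.
ticks = [(random.choice(symbols), round(random.uniform(100, 400), 2))
         for _ in range(10_000)]

def compressed_size(rows):
    return len(zlib.compress(json.dumps(rows).encode(), level=9))

unsorted_size = compressed_size(ticks)
sorted_size = compressed_size(sorted(ticks))  # cluster by symbol, then price

print(unsorted_size, sorted_size)
```

Columnar formats like Parquet push this much further with per-column run-length and dictionary encoding, which is why sorting/partitioning by symbol and timestamp is usually the first thing to tune.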
how do you have it build a "trading strategy"? it's like asking it to draw you the "best picture".
it will ask you so many questions you end up building the thing yourself.
if you do get something, given that you didn't write it and might not understand how to interpret the data it's using, how will you know whether it's trading alpha or trading risk?
I couldn't care less about scraping and web automation, and I will likely never use that application.
I am interested in solving a certain class of problems, and getting Claude to build a proxy API for any website is very similar to getting Claude to find alpha. That loop starts with Claude finding academic research, recreating it, doing statistical analysis, refining, updating itself, and iterating.
Claude building a proxy JSON API for any website and Claude building trading strategies are the same problem with the same class of bugs.
What is important now is developing techniques for detecting patterns, as this can be applied to research, science, and medicine.
If you feed it such a DB with options, it will find "successful trading strategies". It will employ overnight gapping and momentum fades, and it will try various option deltas likely to work. Maybe it will find something that reduces overall volatility compared to beta, and you can leverage it to your heart's content.
Unfortunately, it won't find anything new. More unfortunately, you probably need 6-10 years of data and a walk-forward test to see if the overall method is trustworthy.
I bet it doesn't achieve a single successful (long term) trading strategy for FUTURE trades. Easy to derive a successful trading strategy on historical data, but so naive to think that such a strategy will continue to be successful in the long term into the future.
If you do, come back to me and I’ll give you one million USD to use it - I kid you not. The only condition is that your successful future trading strategy must be based solely on historical data.
I've used/use both, and find them pretty comparable, as far as the actual model backing the tool. That wasn't the case 9 months ago, but the world changes quickly.
None of them are particularly sticky - you can move between them with relative ease in vscode for instance.
I think the only moat is going to be based on capacity, but even that isn't going to last long as the products move away from the cloud and closer to your end devices.
To quote The Godfather II, "This is the business we have chosen."
The most popular and important command line tools for developers don't have the consistency that Claude Code's command line interface does. One reason Claude Code became so popular is that it worked in the terminal, where many developers spend most of their time. And using tools like Claude Code's CLI is a daily occurrence for many developers. Some IDEs can be just as difficult to use.
For people who don’t use the terminal, Claude Code is available in the Claude desktop app, web browsers and mobile phones. There are trade-offs, but to Anthropic’s credit, they provide these options.
But for something like Claude Code there are unlimited things you can do with it, so it's better for them to accept a free-form input.
edit: removed obnoxious list in favor of the link that @thehamkercat shared below.
My favorite is IS_DEMO=1 to remove a little bit of the unnecessary welcome banner.
Which, for the record, hasn't actually happened since I started using it like that.
One thing to be aware of with the pure devcontainer approach: your workspace is typically bind-mounted from the host, so the agent can still destroy your real files. Network access is also unrestricted by default. The container gives you process isolation but not file or network safety.
I'm paranoid about rogue AIs, so I try to make everything safe-by-default: the agent works on a copy of your workdir, you review a unified diff when it's done, and you apply only what you want. So your originals are NEVER touched until you explicitly say so, and network can be isolated to just the agent's required domains.
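A minimal sketch of that copy/diff/apply loop (function names here are hypothetical, not yoloAI's actual code): the agent only ever touches a copy, and the originals stay untouched until you apply the reviewed diff.

```python
import difflib
import shutil
from pathlib import Path

def make_sandbox(workdir: Path, sandbox: Path) -> None:
    """Copy the real workdir; the agent edits only the copy."""
    shutil.copytree(workdir, sandbox)

def review_diff(workdir: Path, sandbox: Path) -> str:
    """Unified diff of everything the agent changed, for human review."""
    chunks = []
    for edited in sorted(sandbox.rglob("*")):
        if not edited.is_file():
            continue
        rel = edited.relative_to(sandbox)
        original = workdir / rel
        before = (original.read_text().splitlines(keepends=True)
                  if original.exists() else [])
        after = edited.read_text().splitlines(keepends=True)
        chunks.extend(difflib.unified_diff(before, after, f"a/{rel}", f"b/{rel}"))
    return "".join(chunks)
```

Applying the diff is then a separate, explicit step, so an agent running `rm -rf` inside the sandbox costs you nothing.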
Anyway, here's what I think will work as my next yoloAI feature: a --devcontainer flag that reads your existing devcontainer.json directly and uses it to set up the sandbox environment. Your image, ports, env vars, and setup commands come from the file you already have. yoloAI just wraps it with the copy/diff/apply safety layer. For devcontainer users it would be zero new configuration :)
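For reference, a sketch of what that flag would need to read. The field names come from the devcontainer.json metadata spec; note that real devcontainer.json files may contain comments (JSONC), which plain `json.loads` rejects, so a real implementation needs a comment-tolerant parser.

```python
import json
from pathlib import Path

def read_devcontainer(workdir: str) -> dict:
    """Extract the sandbox-relevant fields from .devcontainer/devcontainer.json."""
    path = Path(workdir) / ".devcontainer" / "devcontainer.json"
    cfg = json.loads(path.read_text())
    return {
        "image": cfg.get("image"),              # container image to reuse
        "ports": cfg.get("forwardPorts", []),   # ports to forward
        "env": cfg.get("containerEnv", {}),     # extra environment variables
        "setup": cfg.get("postCreateCommand"),  # one-time setup command
    }
```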
This is a bit intense.
I asked chatgpt to chart the number of new bullet points in the CHANGELOG.md file committed by day. I did nothing to verify accuracy, but a cursory glance doesn't disagree:
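In case anyone wants to reproduce it without ChatGPT, a rough sketch of the counting step (the `DATE` prefix trick and the `- ` bullet style are assumptions about how you'd invoke `git log` and how the changelog is formatted):

```python
from collections import Counter

def bullets_added_per_day(patch_log: str) -> Counter:
    """Count bullet lines added to CHANGELOG.md, grouped by commit day.

    patch_log is expected to be the output of something like:
        git log -p --date=short --pretty='format:DATE %ad' -- CHANGELOG.md
    """
    counts = Counter()
    day = None
    for line in patch_log.splitlines():
        if line.startswith("DATE "):
            day = line[len("DATE "):]
        elif line.startswith("+- ") and day:  # a diff line adding a "- " bullet
            counts[day] += 1
    return counts
```

Feed the resulting day/count pairs to whatever charting tool you like.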
Ctrl + _ (Ctrl + underscore)
Applies to the line editor outside of CC as well.

it's almost as if the thing is not intelligent at all and is just another abstraction on top of what we already had.
> .claude/rules/*.md Project rules
> ~/.claude/rules/*.md User rules
Or is it just a way to organise files to be imported from other prompts?