I'm curious what specifically you're interested in from openrouter. Do they offer specific LLMs, or is it just a great way to kick the tires on new models as they emerge?
The original intention behind t2x was to be "functionality first" and make an opinionated call on which LLMs get used for specific functionality. However, I think the way to go will be sane defaults, and then allowing user-specified models from something like openrouter.
jakedahn 19 days ago [-]
hi friends!
I'm the author of t2x and shrugginface.com, and I'm surprised to see this landed on HN a month after I first shared it.
I originally planned to make it real during the holiday break but got distracted by other projects. This is the inspiration bump I needed, so I'll get back to shipping over the coming weeks.
marckohlbrugge 18 days ago [-]
Hope it’s okay I shared it here. Saw it in my RSS reader and figured HN would enjoy it.
Glad to hear it served as inspiration. The video looks promising!
youssefabdelm 19 days ago [-]
I wish someone would do something like this but without the preamble like "t2x ask 'bla bla'"
All I want to do is talk to my computer, like:
"Sort these files" or whatever. Straight away. No noise.
And it should have a history of what was done already, so it knows what I'm talking about.
If people are concerned about safety, it could just be a 'mode' or an app that isn't a terminal but looks like a terminal.
There's aichat, which you can use in REPL mode to do exactly that: https://github.com/sigoden/aichat
Shameless plug: I mostly use my own work: https://github.com/codespin-ai/codespin
But it requires you to pipe: `ls | codespin go sort`. Maybe I should add what you're suggesting.
jakedahn 19 days ago [-]
If you run `ls -la | t2x "sort these files alphabetically"`, it will do the right thing.
However, subcommands like `t2x ask` are used to route you to a different model with different behavior. The ask subcommand currently makes requests to Perplexity, where you can ask questions and get near-realtime grounding from the real world.
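A couple of invocations in the forms mentioned in this thread (the prompts and filenames are just illustrative; exact behavior depends on your configuration):

ls -la | t2x "sort these files alphabetically"   # piped stdin goes straight to the default model
t2x ask file.md "what are the key points here?"   # ask routes the request to Perplexity for grounded answers
t2x summary /path/to/file   # functionality-specific subcommand with an opinionated default model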
satisfice 19 days ago [-]
Why do you want this? Not only is it unreliable and inefficient to do such things with LLMs, but we already have easy ways to do them with existing tools.
I work with computers and do data wrangling on a daily basis. What am I missing?
w0m 18 days ago [-]
I use a CLI tool 100x/day for misc things; it can hit OpenAI or a local LLM.
Simon's LLM tool is probably the best of any of these sorts of things! So thank you for making it, I still use it regularly to kick the tires on new models, and it served as a significant inspiration for the design of t2x.
The idea behind t2x was to make a stripped-down cli with a "functionality first" mindset and remove the actual selection of specific models from the user's flow.
The only difference is that you provide subcommands, while aichat requires users to create various roles.
t2x 'Write a ...' => aichat 'Write a ...'
t2x ask file.md 'What is...' => aichat -f file.md 'What is...'
t2x summary /path/to/file => aichat -r summary -f /path/to/file
t2x summary "This is a..." => aichat -r summary "This is a..."
t2x ocr /path/to/image.png => aichat -r ocr -f /path/to/image.png
> Note: In AIChat, `-r/--role` specifies the role to use, and `-f/--file` indicates the file or URL input.
that would be sweet
https://github.com/TheR1D/shell_gpt
Has nice shell integration.
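For instance (a sketch based on shell_gpt's documented usage; flags may have changed between versions):

sgpt "explain what this error means: Permission denied (publickey)"
sgpt --shell "find all markdown files modified in the last week"   # proposes a shell command and asks before running it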