Personally I'll wait for OpenAI to add this feature directly. I'm sure they're working on it.
I don't want this solution delivered in the form of an extension (one practical reason is I use ChatGPT from mobile a lot of the time). I have 0 extensions installed in general.
Larrikin 24 days ago [-]
Why do you subject yourself to web ads?
winternewt 22 days ago [-]
How else do you propose that the web sites you visit should fund their work and infrastructure?
snypher 22 days ago [-]
Micropayments for page views. Just let me pay you what the ad guys are paying you and I'll do it.
FractalHQ 18 days ago [-]
This is what Brave browser tried to do, but everyone on HN had a spasm cus “boo crypto”.
winternewt 16 days ago [-]
Also flattr, long ago. That failed too, unfortunately.
az09mugen 22 days ago [-]
With Brave on mobile you get no ads, and no extension is needed.
theropost 24 days ago [-]
I regularly use the data-export button in ChatGPT, then parse the JSON into an HTML file for each conversation, plus a small database for finding the right file(s) when needed.
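Roughly, that workflow can be sketched in a few lines of Python. This assumes the export's conversations.json keeps the usual title / create_time / mapping layout; field names may differ between export versions, so treat it as a starting point, not a finished tool:

    import html
    import json
    import sqlite3
    from pathlib import Path

    EXPORT = Path("conversations.json")  # from ChatGPT's data-export archive
    OUT = Path("chats")
    OUT.mkdir(exist_ok=True)

    # Small full-text index so the right file can be found again later
    # (FTS5 is available in normal CPython sqlite3 builds).
    db = sqlite3.connect("chats.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chats USING fts5(title, body, path)")

    for convo in json.loads(EXPORT.read_text(encoding="utf-8")):
        title = convo.get("title") or "untitled"
        # "mapping" holds message nodes; node order is only roughly chronological,
        # so follow parent/child links if you need exact ordering.
        parts = []
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            texts = (msg.get("content") or {}).get("parts") or []
            text = "\n".join(t for t in texts if isinstance(t, str)).strip()
            if text:
                parts.append((msg.get("author", {}).get("role", "?"), text))
        body = "\n".join(
            f"<h3>{role}</h3><pre>{html.escape(text)}</pre>" for role, text in parts
        )
        name = f"{convo.get('create_time', 0)}-{title[:40]}.html".replace("/", "_")
        path = OUT / name
        path.write_text(f"<html><body><h1>{html.escape(title)}</h1>{body}</body></html>",
                        encoding="utf-8")
        db.execute("INSERT INTO chats VALUES (?, ?, ?)",
                   (title, " ".join(t for _, t in parts), str(path)))

    db.commit()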
amelius 24 days ago [-]
I wish I could search through my chats more easily.
But I don't want to download extensions, they are too security-unfriendly.
I learned today that o1 is able to search through all my chats and can check whether what it finds is relevant to the current context. I found that very useful, as I have a lot of very long chats about a single project.
ChatGPT lists the findings with date and context, and it searches further back if asked to (in my case, back to summer 2024).
graeme 24 days ago [-]
Wait, how do you do that?
rallyforthesun 23 days ago [-]
I just asked it to recall from our last chats whether we had discussed this particular bug before.
martylamb 22 days ago [-]
I feel the same. You might want to check out https://martiansoftware.com/chatkeeper for a non-extension option. It's a CLI that syncs a ChatGPT export with local Markdown files. (Full disclosure: it's my project.)
kccqzy 23 days ago [-]
Why not just copy and paste these chats and save them into a local file?
I mean, I've been copying chats with my friends and saving them locally since the days when online chats were called Instant Messaging.
ramoz 24 days ago [-]
I’ve always wanted better search and chat organization.
But I’m at a place where I can’t determine if the ephemeral UX of chatting with AI (ChatGPT, Claude) isn’t actually better. Most chats I want to save these days are things like code snippets that I’m not ready to integrate yet.
rubymamis 24 days ago [-]
You could join my native, cross-platform client waitlist[1] if you're looking to use your OpenAI API key. It's a work in progress, but it's coming along pretty fast.
[1] https://www.get-vox.com/
You might want to check out <https://martiansoftware.com/chatkeeper>. It's a CLI that syncs a ChatGPT export with local Markdown files. I use it to keep my conversation history in Obsidian, where I can search through my conversations and link them to my other notes. (Full disclosure: it's my project.)
s-sameer 24 days ago [-]
That is a perfect use case for an extension like this. It makes it easier to jump back into a previous conversation, and that's primarily what I use it for as well.
behnamoh 24 days ago [-]
The fact that you even need something like this shows how far we are from truly useful language models. Ideally they would hold the context of all your messages in mind, but so far we've had to manage that context for them manually.
themanmaran 24 days ago [-]
To be fair, this is less a language-model problem and more a problem of the application layer around them.
Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
I think long-term AI chat is just relatively new as a UI pattern, so it takes time to build conventions around it.
Ex: in 2023 I told GPT to answer all questions like a pirate. I never told it to stop doing that, so if we're loading every historical chat in memory, should it still be answering as a pirate?
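A minimal sketch of the message search/retrieval idea mentioned above, assuming chats have already been exported to plain-text files in a local folder (the folder name, file extension, and keyword scoring are placeholder choices, not anything ChatGPT actually does):

    from pathlib import Path

    def retrieve(query: str, chat_dir: str = "chats", top_k: int = 3) -> list[str]:
        """Score saved chats by naive keyword overlap and return the best few."""
        terms = set(query.lower().split())
        scored = []
        for path in Path(chat_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8")
            score = sum(text.lower().count(t) for t in terms)
            if score:
                scored.append((score, path.name, text))
        scored.sort(reverse=True)  # highest keyword overlap first
        return [f"[{name}]\n{text[:2000]}" for _, name, text in scored[:top_k]]

    def build_prompt(question: str) -> str:
        """Send only the retrieved snippets, not the entire chat history."""
        context = "\n\n".join(retrieve(question))
        return f"Relevant past conversations:\n{context}\n\nQuestion: {question}"

In practice you'd likely swap the keyword overlap for embeddings, but the shape is the same: search first, then put only the hits into context.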
Vampiero 24 days ago [-]
> Theoretically with an infinite context window a model would just work fine forever by shoving the entire conversation history into context with each request. But a message search/retrieval makes a lot more sense.
Nope: attention cost grows with context length (quadratically in a vanilla transformer), so with an infinite context window the LLM would take forever to give you an answer. It would be useless.
The context window isn't really a fundamental thing anyway; it's an artifact of current LLM architecture. We are building a ton of technology around it, but who's to say it's the right approach?
Maybe the best AIs will only use a very tiny LLM for actual language processing while delegating storage and compression of memories to something that's actually built for that.
layer8 23 days ago [-]
You need something like this if you want to use them as a reminder. Even if LLMs could remind you of past chats, they wouldn’t know which chats you want to be reminded of. It’s like marking chats as favorites. You actually have to mark them yourself, for anyone to know which chats are your favorites.
jaredsohn 23 days ago [-]
OpenAI has already built some of this, but maybe it requires a paid account: https://mashable.com/article/chatgpt-chat-history-search-int...
It also has Projects (it says they're for Plus, Team, and Pro users): https://help.openai.com/en/articles/10169521-using-projects-...
Any outputs they generate that one finds useful need to be retained outside their walled garden.
Since then, for whatever reason, it's not available for my account (I'm on a Plus plan).
s-sameer 23 days ago [-]
Thanks for sharing that... I don't use the Pro plan, so I built this extension to let everyone bookmark their important chats for free.
s-sameer 23 days ago [-]
All chats are stored locally, so there are no privacy concerns either.
dtagames 22 days ago [-]
I made a tool called Slate AI[0] that you can run locally. It organizes your chat threads and image generations into tabs that you can save to your local disk (as JSON files in the OpenAI API format) and reload to continue later. It exports Markdown, too.
[0] https://github.com/garranplum/slate-ai-public
Another feature that I find shockingly absent from most web-based chat providers is autocomplete, i.e. Copilot-like suggestions to complete what you're typing. Typing long text into chat boxes quickly becomes tedious, and context-based autocomplete helps a lot. You can experience this in AI IDEs like Zed or Cursor; in fact, I often resort to using those just for this feature.
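As a rough illustration of how that could work outside an IDE, here is a sketch against the official OpenAI Python SDK; the model name, prompt, and token limit are placeholder assumptions:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def suggest_completion(conversation: str, draft: str) -> str:
        """Suggest how to finish the message the user is currently typing."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any small, fast model
            messages=[
                {"role": "system",
                 "content": "Continue the user's partially typed message. "
                            "Return only the continuation, with no commentary."},
                {"role": "user",
                 "content": f"Conversation so far:\n{conversation}\n\nDraft:\n{draft}"},
            ],
            max_tokens=40,
            temperature=0.2,
        )
        return resp.choices[0].message.content

    # e.g. suggest_completion(chat_history, "Could you refactor this function to ")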
brettgriffin 24 days ago [-]
There are two technologies I use every day that demonstrate how a company can solve an incredibly hard problem, X, while completely dropping the ball on the presumably easier UX problem, Y. ChatGPT is one of them. Driving my Tesla is the other. I'm not sure how or why it happens, but I think about it daily.
tmpz22 24 days ago [-]
Ineffective dogfooding. PMs might use it every day, but they only use a subset of the functionality. Some engineers may intentionally never use it when they get home because they're so sick of looking at it. Some engineers do crazy esoteric things with it, but that doesn't propagate because their heads are down within the org. Most people showcase exclusively happy paths to leadership, sorry, I meant management. Executives only use it for emails, demos, and, again, a limited subset of happy paths.
Just burnout, siloing, and a lack of creativity. We can't solve these problems in the industry because we are greedy short-term thinkers who believe we're long-term innovators. To say nothing of believing we are smarter and more entitled than we are.
kraftman 24 days ago [-]
But each chat has a unique link that you can just bookmark, right?
graeme 24 days ago [-]
Is it possible to bookmark a chat on mobile? I haven't found a way to do so on iOS.
Claude has a way to star important conversations. I don't think ChatGPT has that.
My only solution so far has been aggressively deleting conversations once I find an answer and know I don't need it for reference.
fragmede 24 days ago [-]
In the ChatGPT iOS app, I can long-press on a chat in the left sidebar, and one of the options is "share chat".
graeme 24 days ago [-]
Ah. That one actually creates a public link, and it doesn't work if there are images, or in certain other circumstances.
On desktop you can copy the URL directly for reference and open it later.
brettgriffin 24 days ago [-]
Of course. But I use it dozens of times a day across dozens of projects. Many of the concepts are linked together. Intelligently indexing, linking, and referencing them seems like a pretty obvious feature. I doubt I'm in the minority in expecting this.
s-sameer 24 days ago [-]
It basically offers a much better user experience than manually bookmarking each link.
perchard 24 days ago [-]
perhaps Y is harder to solve than you are assuming
llamaimperative 24 days ago [-]
"Harder [for the organization in question] to solve" is definitely right
Not really an excuse though, since a product company's mandate is to create a product that doesn't leave its customers baffled about apparently missing functionality.
s-sameer 24 days ago [-]
lol ikr, it's crazy this doesn't already exist
Kerbonut 23 days ago [-]
Honestly, I'm pretty sick of ChatGPT these days. It completely ignores custom instructions, loses context insanely quickly, and has bugs when working with Canvas where it puts the code in the chat instead of updating the canvas. The Projects feature is half-baked and a terrible experience, and GPTs are just really stupid and also half-baked. I started using Le Chat (Mistral) again and, honestly, the conversations there are much more fun. Tons of issues with that one as well, but I am happier using it, haha. I ended up using a desktop app that lets me control the system prompt against Mistral's API and couldn't be happier.
memhole 23 days ago [-]
I've been wondering whether OpenAI's updates ruin other applications or utility. It kinda makes me skeptical of global models. Obviously OpenAI is optimizing and changing how they train their models, so it makes sense to me that you'd lose some of the quirks. I think this is actually the promise of open-weight models. Personally, I've never used ChatGPT, Claude, or whatever; I've found a decent amount of utility in just the open-weight models.
Kerbonut 23 days ago [-]
I am right there with you on the open-weight models. The only reason I went back to ChatGPT is that I'm rebuilding my servers and had to take out my 3090 Ti. That thing is huge, and I needed the room for hard drives until my 3.5" to 5.25" adapters come in.
maxbaines 24 days ago [-]
I originally built my own client (llmpad.com) to solve this problem, as well as to use other LLMs and features. I'm a little surprised others haven't done this too.
You'll be able to try llmpad soon; feel free to message me.
consumer451 22 days ago [-]
Just a reminder that LibreChat exists. It's FOSS and lets you bring your own API keys for all the LLM providers. The UI is excellent.
You can run it locally or, as I do, on a $5/month Linode server. I don't want to pay ~$20/month to each LLM provider, so I put $5 to $10 on my Anthropic and OpenAI API accounts every couple of months, and that lasts me plenty long.
You get to save all your chats, change models mid-chat, view code artifacts, create presets, and much more.
If you don't know how to set up something like this, ask ChatGPT or Claude. They will walk you through it, and you will learn a useful skill. It's shockingly easy.
https://librechat.ai
https://github.com/danny-avila/LibreChat