Using Google now feels completely lackluster in comparison.
I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.
How about you?
You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."
LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.
Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.
Just yesterday I was trying to remember the name of a vague concept I’d forgotten, with my overall question being:
“Is there a technical term in biology for the equilibrium that occurs between plant species producing defensive toxins, and toxin resistance in the insect species that feed on those plants, whereby the plant species never has enough evolutionary pressure to increase its toxin load enough to kill off the insect that is adapting to it”
After fruitless searching around because I didn’t have the right things to look for, putting the above in ChatGPT gave an instant reply of exactly what I was looking for:
“Yes, the phenomenon you're describing is often referred to as evolutionary arms race or coevolutionary arms race.”
Evolutionary arms race is somewhat tautological; an arms race is the description of the selective pressure applied by other species on the evolution of the species in question. (There are other, abiotic sources of selective pressure, e.g. climate change on evolutionary timescales, so while 'evolution' at least carries a broader meaning, 'arms race' adds nothing that wasn't already there.)
That said, running your exact query on DeepSeek R1 and Claude Sonnet 3.7, both did include Red Queen in their answers, along with other related concepts like tit-for-tat escalation.
Firstly, "Evolutionary Arms Race" is not tautological, it is a specific term of art in evolutionary biology.
Secondly, "evolutionary arms race" is a correct answer, it is the general case of which the Red Queen hypothesis is a special case. I do agree with you that OP described a Red Queen case, though I would hesitate to say it was because of "equilibrium"; many species in Red Queen situations have in fact gone extinct.
https://en.wikipedia.org/wiki/Evolutionary_arms_race
https://en.wikipedia.org/wiki/Red_Queen_hypothesis
At any rate, almost NONE of these actual terms of art are about the sort of equilibrium that was the exact heart of the OP's query to the LLM, and thus nearly none of the broader umbrella 'arms race' is about why the plant doesn't have the evolutionary pressure to actually drive the parasite extinct. An arms race doesn't have to be in equilibrium. Armor vs. weapons were in an arms race, and indeed at equilibrium, for millennia, but then bullets came along, armor went extinct almost overnight, and didn't reappear for five centuries. Bullets won the arms race. Arms races have nothing to do, inherently, with equilibrium.
You seem to have misunderstood the nature of the equilibrium in a Red Queen scenario, which is the fundamental effect that the hypothesis is directly named for. That species that are in Red Queen relationships can go extinct is in no way a counterargument to the idea that two (or more) species tend to coevolve in such a way that the relative fitness of each (and of the system as a whole) stays constant. See, for example, the end of the first paragraph on the origin of Van Valen's term at your own wiki link.
Evolutionary steady-state is a synonymous term without the baggage of the literary reference; it also avoids the incorrect connotation of 'arms race', which leads people to forget the abiotic factors that are often a dominant mechanism in extinctions as the realized niche and the fundamental niche diverge. Instead, Van Valen was specifically proposing the Red Queen hypothesis as an explanation of why extinction appears to follow a half-life, i.e. a constant probability, rather than a rate that depends on the lifetime of the taxon. This mechanism has good explanatory power for the strong and consistent evidence that speciation rate (usually considered as the log of the number of genera, depending on definition; see Stanley's Rule) has a direct and linear relation with the extinction rate. If Red Queened species didn't go extinct, Van Valen wouldn't have needed to coin the term to explain this correlation.
Or were you deliberately invoking Cunningham's Law?
GP was looking for a specific term that they had heard before. It was co-/evolutionary arms race, and ChatGPT guessed it correctly.
Also GPT-4o elaborated the answer (for me at least) with things like:
Or updated for the LLM age, "the best way to get the right answer from an LLM is not to ask it a question and use its answer; it's to post its response on a site of well-educated and easily nerdsniped people"
Time to pop some popcorn and hit refresh.
They’re very helpful for asking more refined questions by getting the terminology correct.
I think of AI as an intelligent search engine / assistant and, outside of simple questions with one very specific answer, it just crushes search engines.
I use the LLMs to find the right search terms, and that combination makes search engines much more useful.
LLMs by themselves give me very superficial explanations that don't answer what I want, but they are also a great starting point that will eventually guide me to the answers.
* Gemini lets you do this by actually streaming your screen to it and verbally chatting about whatever is on screen. While interesting, I find the responses to be a little less thorough. YMMV.
Even with supposedly authoritative peer-reviewed research papers, errors are extremely frequent whenever the authors claim to quote earlier work, because in reality most of them do not bother to read their cited bibliography carefully.
When you get an answer from an AI, the chances greatly increase that the answer regurgitates some errors present in the publications used for training. At least when you get the answer from a real book or research paper, it lists its sources and you can search them to find whether they have been reproduced rightly or wrongly. With an AI-generated answer it becomes much more difficult to check it for truthfulness.
I will give an example of what I mean, which I happened to stumble on today. I read a chemistry article published in 2022 in a Springer journal. While the article contained various useful information, it also made a claim that seemed suspicious.
In 1782, the French chemist Guyton de Morveau coined the word "alumine" (French) = "alumina" (Latin and English) to name what is now called oxide of aluminum, which was then known as earth of alum ("terra aluminis" in Latin).
The 2022 article claimed that the word "alumina" had already been used earlier in the same sense by Andreas Libavius in 1597, who would thus have been the creator of this word.
I found this hard to believe, because the need for such a word arose only during the 18th century, when European chemists, starting with the Swedish chemists, finally went beyond the level of chemical classification inherited from the Arabs and began to classify all known chemical substances as combinations of a restricted set of primitive substances.
Fortunately, the 2022 article had a detailed bibliography, and using it I was able to find the original work from 1597 and the exact paragraph in it that was referred to. The claim of the 2022 article was entirely false. While the paragraph contained a word "alumina", it was not a singular feminine adjective (i.e. agreeing with "terra") referring to the "earth of alum". It was not a new word at all, but just the plural of the neuter word "alumen" (= English alum), in the sentence "alums or salts or other similar sour substances can be mixed in", where "alums" meant "various kinds of alum", like "salts" meant "various kinds of salt". Nowhere in the work of Libavius was there any mention of an earth that is a component of alum and that could be extracted from alum (in older chemistry, "earth" was the term for any non-metallic solid substance that neither dissolves in water nor burns in air).
I have given this example in detail in order to illustrate the kinds of errors that I very frequently encounter whenever authors claim to quote other works. While this was an ancient quotation, lots of similar errors appear when quoting more recent publications, e.g. when quoting Einstein, Dirac or the like.
I am pretty sure that if I asked an AI assistant something, the answers would contain no fewer errors than publications written by humans, but they would be more difficult to verify.
Whoever thinks that they can get a quick answer to any important question in a few seconds and be done with it is naive, because the answer to any serious question must be verified thoroughly; otherwise there is a great chance that those who trust such answers will just spread more disinformation, like the sources on which the AI was trained.
Despite a lot of effort, I'm just not a highly skilled developer and I don't have any friends / colleagues I can turn to for assistance (I don't know a single software developer or even another person who enjoys video games). While resources like StackOverflow are certainly useful, having answers tailored to my specific situation really accelerates progress.
I'm not trying to cure cancer here, and much of what would be considered the "best approach" for a small game architecture is unique to the developer. As such, AI is an incredible resource to lean on and get information tailored to my unique use case ("here is my code... how does {topic} apply to my situation?").
And yes, I find errors from time to time, but that is good. It keeps me on my toes and forces me to really understand the response / perspective.
Google 55%, as GPT is not a local search engine.
GPT 45%, but I use it for more intelligent learning/conversations/knowledge base.
If I had a GPT phone... sorta like the movie Her, I would rarely leave my phone's lock screen. My AI device / super AI human friend would do everything for me, including getting me to the best lighting to take the best selfies...
For example: Take the ingredient list of a cosmetic or other product that could be 30-40 different molecules and ask ChatGPT to list out what each of them is and if any have potential issues.
You can then verify what it returns via search.
You can criticize LLMs all you want, but the fact is that they provide value to people in ways that alternatives simply don’t. The energy consumption is a concern, but don’t pretend there are viable alternatives when there aren’t.
The LLM people are heavily invested in ever bigger models to keep the research money flowing in; it wouldn't make sense to release a service that undercuts that.
That leaves independent actors. Presumably building and maintaining an up-to-date database is difficult, so only the big search engines do it.
LLMs store embeddings of individual tokens (usually parts of words), so the result of an actual search would be the top-k embeddings and their corresponding tokens, similar to the output of a Google search. You could extract the initial matrix of embeddings from some open-weights model and find the tokens closest to your query. However, it's not clear why you would do this. OP got coherent text, so that's not search.
It's _similar_, though, because attention in LLMs basically looks for the most similar tokens. So to answer the question about the term, the LLM had to create a stream of tokens that's semantically closest to the given description. That is somewhat like a search, but it's not exactly the same.
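Out of curiosity, here is a minimal sketch of what that token-level "search" could look like: pull the embedding matrix out of an open-weights model and rank the vocabulary by cosine similarity. The model choice (gpt2) and the query string are just illustrative:

```python
import numpy as np
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
# The input embedding matrix: one row per vocabulary token.
emb = model.get_input_embeddings().weight.detach().numpy()

def nearest_tokens(text: str, k: int = 5) -> list[str]:
    # Embed the query (mean of its token embeddings), then rank the
    # whole vocabulary by cosine similarity.
    ids = tok.encode(text)
    q = emb[ids].mean(axis=0)
    sims = emb @ q / (np.linalg.norm(emb, axis=1) * np.linalg.norm(q) + 1e-9)
    return tok.convert_ids_to_tokens([int(i) for i in np.argsort(-sims)[:k]])

print(nearest_tokens(" equilibrium"))
```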
The reason is pretty simple. If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you searched.
If you aren't confident of the search terms, it can take half an hour of dicking around with different terms, clicking through a couple of pages of search results for each set of terms, until you finally figure out the lingo to use. Figuring out what you are really after from that wordy description is the inner magic of LLMs.
Most often not true in the kind of searches I do. Say, I search for how to do something in the Linux terminal (not just the command, but the specific options to achieve a certain thing). Google will often take me to pages that do have the answer, but are full of ads and fluff, and I have to browse through several options until I find the ones I want. ChatGPT just gives me the answer.
And with any halfway decent model, hallucination only seems to be a problem in difficult or very specialized questions. Which I agree shouldn't be asked to LLMs (or not without verifying sources, at least). But over 90% of what I search aren't difficult or specialized questions, they're just things I have forgotten, or things that are easy but I don't know just because they're not in my area of expertise. For example as a learner of Chinese, I often ask it to explain sentences to me (translate the sentence, the individual words, and explain what a given word is doing in the sentence) and for that kind of thing it's basically flawless, there's no reason why it would hallucinate as such questions are trivial for a model having tons of Chinese text in its training set.
I asked Claude to give me a recipe that uses mushrooms and freezes well, and it gave me a decent-looking soup recipe. It might not be the best soup ever, but it's soup; kinda hard to mess up. The alternative would be to get a recipe from the web with a couple dozen paragraphs about how this is the bestest soup ever and it comes from their grandma and reminds them of summer and whatnot.
It didn't suggest adding glue? I imagine it would freeze real well if you did that. /s
Interesting. I just use random words; the LLM doesn't care about full sentences.
But what I'm talking about is when I want to read the page for myself. Waste of time to have to wait for an LLM to chew on it.
Really, for many “page searches”, a good search engine should just be able to take you immediately to the page. When I search “Tom Hanks IMDB”, there’s no need to see a list of links - there’s obviously one specific page I want to visit.
https://notes.npilk.com/custom-search
Are you feeling lucky?
Unfortunately you can’t really show ads if you take someone directly to the destination without any interstitial content like a list of links…
I know what I'm looking for. I just need the exact URL.
Perplexity miserably fails at this.
Grok is great for finding details and background info about recent news, and of course it's great for deep-diving on general knowledge topics.
I also use Grok for quick coding help. I prefer to use AI for help with isolated coding aspects such as functions and methods, as a conversational reference manual. I'm not ready to sit there pretending to be the "pilot" while AI takes over my code!
For the record, I do not like Google's AI generated results it spams at me when I search for things. I want AI when I choose to use AI, not when I don't choose it. Google needs a way to switch that off on the web (without being logged in).
Traditional search is dead, semantic search through AI is alive and well.
I can't recall a single time AI misunderstood the meaning of my search, while Google loves to make assumptions, rewrite my search query, and deliver the results that pay it best, the ones with the best ads (in my opinion as a lifetime user).
Let's not even mention how they willingly accept misleading ads atop the results, which trick the majority of common users into downloading malware and adware on the regular.
The reason Google is still seeing growth (in revenue etc.) is that a lot of 'commercial' search still ends with this kind of action.
Take purchasing a power drill for example, you might use an LLM for some research on what drills are best, but when you're actually looking to purchase you probably just want to find the product on Home Depot/Lowe's etc.
Ad-sponsored models are going to be dead as soon as people realize they can't trust output.
And because the entire benefit to LLM search is the convenience of removing a human-in-the-loop step (scanning the search results), there won't be a clear insertion/distinction point for ads without poisoning the entire output.
Those who can afford, buy unbiased.
Those who cannot, accept biased free services.
I suppose that's Google's hope for how future search turns out.
Over time subscription models will converge to subscription with advertisements. Like newspapers did.
Ads xor payment, or else you can fuck all the way off.
What you don't get with pay TV
Available eyeballs will be sold.
Example from my work: many of our customers will search for our company name in order to find the login page. I've watched them do this over screen share.
When they do that, the top search result is an ad slot. The second search result is our login page.
We buy ads against our own company name so that we can be the ad slot as well as the first result. Otherwise a competitor who buys ads against our company name could take the top slot.
What? On Planet Earth, this is already a thing.
Kind of like a manual, with an index.
RTFM people.
It sounds trivial to integrate an LLM front end with a search engine backend (probably already done), so you could type "frob language" and get a curated clickable list of the top resources (language website, official tutorial, reference guide, etc.), discarding spam and irrelevant search engine results in the process.
https://news.ycombinator.com/item?id=9224
The LLM could "intelligently" pick from the top several pages of results, discard search engine crap results and spam, summarize each link for you, and so on.
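A minimal sketch of that pipeline might look like the following, assuming a hypothetical search_web() backend; the function, its return shape, and the model name are illustrative, not any specific product's API:

```python
from openai import OpenAI

def search_web(query: str) -> list[dict]:
    """Hypothetical backend returning [{'title', 'url', 'snippet'}, ...]."""
    raise NotImplementedError

def curated_results(query: str) -> str:
    # Fetch raw hits, then ask the LLM to filter and summarize them.
    hits = search_web(query)
    listing = "\n".join(
        f"- {h['title']} ({h['url']}): {h['snippet']}" for h in hits[:20]
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"From these search results for '{query}', discard spam "
                       f"and irrelevant links, and list the most authoritative "
                       f"resources with a one-line summary each:\n{listing}",
        }],
    )
    return resp.choices[0].message.content

print(curated_results("frob language"))
```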
We don't have that now, and haven't for 30 years (I should know, I was there, using Yahoo!, AltaVista, Lycos, and such back in the day).
Or any other LLM that’s continuously trained on trending news?
Instead of the core of the answer coming from the LLM, it could piece together a few relevant contexts and just provide the glue.
How do you know the media isn't lying to you? It's happened many times before (think pre-war propaganda).
Odds are pretty good that, at least for less popular projects, the homepages themselves will soon be produced by some LLM and left at that, warts and all...
> In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.
Other considerations:
- Visiting the actual website, you’ll see the programming language's logo. That may be a useful memory aid when learning.
- The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).
- The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.
- I have zero concerns about hallucination when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon, lol.
Just because you have a hammer in your hand doesn’t mean you should start trying to hammer everything around you, friend. Every tool has its place.
For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.
I'm hard pressed to construct an argument where, with widely accessible LLM/LAM technology, that still looks like:
Summarization and deep-indexing are too powerful and remove the necessity of steps 2-4.
For example, with the API case, why doesn't your future IDE directly surface the API (from its documentation)? Or your future search directly summarize exactly the part of the API spec you need?
And if you are reading technical docs, especially good ones, each word is there for a reason. LLMs throw some of that information away, but they don't have your context to know whether the stuff they throw away is useful or not. The text the summary omitted may well contain an important caveat or detail you really should have known before starting to use that API.
I recently configured Chrome to only use google if I prefix my search with a "g ".
I don't like LLMs for two reasons:
* I can't really get a feel for the veracity of the information without double checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from a LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.
* I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I find that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information-gathering process, I like it.
I could see myself using an AI assistant that helped me search and curate the results, instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.
At least that has been my experience. I admit I don't use LLMs very much.
1. Rules that get prefixed in front of your prompt as part of the real prompt ChatGPT gets. Like what they do with the system prompt.
And
2. Some content makes your prompt too big for the context window, so the rules get cut off.
Then, it might help to measure the tokens in the overall prompt, set a max number, and warn if it goes over. I had a custom chat app that used their APIs with this feature built in.
Another possibility is, when this is detected, it asks you if you want to use a model with a larger context window. Those cost more, so it would be presented as an option. My app let me select any of their models to do that manually.
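A sketch of that token-budget check using tiktoken (the encoding name is real; the limit is an arbitrary illustrative number, not any particular model's window):

```python
import tiktoken

MAX_TOKENS = 8_000  # illustrative budget, not a real model's limit
enc = tiktoken.get_encoding("cl100k_base")

def check_prompt(system_rules: str, user_content: str) -> int:
    # Count tokens in the full prompt and warn when over budget.
    n = len(enc.encode(system_rules + user_content))
    if n > MAX_TOKENS:
        print(f"Warning: prompt is {n} tokens (budget {MAX_TOKENS}); "
              "consider a larger-context (pricier) model.")
    return n
```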
I really like not being complimented on literally everything with a wall of text anymore.
Also, if that works, why doesn't copilot/cursor write lots of excessive code mixed with lots of prose only to distill it later?
The “thinking” models are really verbose output models that summarise the thinking at the end. These tend to outperform non-thinking models, but at a higher cost.
Anthropic lets you see some/all of the thinking so you can see how the model arrived at the answer.
This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health or scientific issues, and specific facts about lots of things. I want authoritative sources.
You check the information you decide should be verified.
An LLM response without explicit mention of its provenance... There's no way to even guess whether it is authoritative.
Actually, it might be fully unbounded even for an n of 1.
What do you even use for the double-check? Some random low-quality content farm? A glitchy LLM? A dodgy mirror of the official docs full of ads? Or do you actually dig into the source code for this?
And do you keep double-checking with all other information on the page... "A TOMLDecodeError will be raised on an invalid TOML document." - are you going to start an interactive session and check which error will be raised?
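For what it's worth, that particular check takes seconds in a Python 3.11+ session (tomllib is in the standard library from that version on):

```python
import tomllib

try:
    tomllib.loads("this is ] not valid TOML")
except tomllib.TOMLDecodeError as e:
    # Confirms the documented behavior: invalid input raises TOMLDecodeError.
    print("raised TOMLDecodeError:", e)
```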
Just because you can find multiple independent sources saying the same thing doesn't mean it's correct.
In all honesty doing this for news and such brings me comfort. Because the truth is usually pretty vanilla.
"What I tell you three times is true"
Part of why I prefer to use a search engine is that I can see who is saying it, and in what context. It might be Wikipedia, but also the CIA World Factbook. Or some blog, but also python.org.
Or (lately) it might be AI SEO slop, reworded across 10 sites but nothing definitive. Which means I need to change my search strategy.
I find it easier (and quicker) to get to a believable result via a search engine than going via ChatGPT and then having to check what it claims.
And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the more I can't trust them.
Really, these days, either I know some resource exists and I want to find it, in which case a search engine makes much more sense than an LLM which might hallucinate, or I want to know if something is possible / how to do it, and the LLM will again hallucinate an incorrect way to do it.
I've only found LLMs useful for translation, transcription, natural language interface, etc.
LLMs have mostly been useful for three things: single line code completion (in GoLand), quickly translating JSON, and generating/optimizing marketing texts.
I use LLMs as a sounding board. Often if I'm trying to tease out the shape of a concept in my head, it's best to write it out. I now do this in the form of a question or request for information and dump it into the LLM.
"Search" can mean a lot of things. Sometimes I just want a website but can't remember the URL (traditional); other times I want an answer (LLMs); and other times, I want a bunch of resources to learn more (search+LLMs).
Bad: summarizing scientific research or technical data
Great: finding travel ideas or clarifying aspects of a franchise's fictional universe.
Especially if I'm looking for a small fact buried in the first results.
Instead I use a search engine and do my own reading and filtering. This way I learn what I'm researching, too, so I don't fall into the vicious cycle of drug abu ^H^H^H^H^H laziness. Otherwise I'll inevitably rely more and more on that thing, and be a prisoner of my own doing by increasingly offloading my tasks to a black box and becoming dependent on it.
Google recently (unrequested) provided me with very detailed AI generated instructions for server config - instructions that would have completely blown away the server. There will be someone out there who just follows the bouncing ball, I hope they've got good friends, understanding colleagues, and good backups!
What a weird sentence. What accuracy guarantees does Kagi have? Or, if you're not "offloading your brain to it", can't you do the same with an LLM?
Moreover, Kagi is a paid service. It has no ads, no hidden ranking, nothing to earn money by manipulating you. On the contrary you, the user, can add filters and ranking modifiers to promote the sites you find to be useful/truthful and demote others which push slop and SEO optimized content to your eyeballs. This is per user, and is not meddled with.
This makes Kagi very deterministic (unlike LLMs), very controllable (unlike LLMs), and very personalized (unlike LLMs). Moreover, Kagi gives you ~20 results or so per search, and no fillers (again, unlike LLMs).
I don't use Kagi's AI assistance features, and I don't pay for the "assistant" part of it, either.
I don't offload my brain to Kagi, because I don't prompt it until it gives me something I like. Instead, I get the results, read them, learn what I'm looking for, and possibly document what I got out of that research. This usage pattern is, again, very different from prompting an LLM until it gives you something that somewhat works or sounds plausible.
I do the hard work of synthesizing and understanding the answer. Not reading some slop and accepting it at face value.
Similarly, I don't offload my brain to LLMs.
> I do the hard work of synthesizing and understanding the answer. Not reading some slop and accepting it at face value.
Again, it's not necessary to accept LLM output at face value.
Use tools, think for yourself, sure. This applies to various tools: Kagi, LLMs, and others. None of these give you "accuracy guarantees". You usually have to think for yourself.
My favourite example of a situation where you don't have to think for yourself is asking an LLM to implement a function in a very strongly typed language. There is only one implementation of `a -> a`. For `(a -> b) -> List a -> List b`, you could return an empty list instead of performing the map. There aren't that many implementations of `(a -> b -> b) -> b -> List a -> b` (three as far as I can see: left/right fold, and a function which just returns the accumulator). It's easier to verify the LLM's solution than to implement it yourself!
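To make that concrete (with the caveat that Python's type hints don't enforce parametricity the way a strongly typed language does), the `(a -> b -> b) -> b -> List a -> b` signature corresponds to something like this sketch:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def foldr(f: Callable[[A, B], B], acc: B, xs: list[A]) -> B:
    # The "intended" implementation: combine elements right-to-left.
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# Spot-checking an LLM-supplied implementation is quick:
assert foldr(lambda x, acc: [x] + acc, [], [3, 1, 2]) == [3, 1, 2]
assert foldr(lambda x, acc: acc + [x], [], [3, 1, 2]) == [2, 1, 3]
```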
As for AI search, I do find it extremely useful when I don't know the right words to search for. The LLM will instantly figure out what I'm trying to say.
And the ratio between using the search engine and Kagi’s LLM agent with search is still 70% search. Sometimes searching is faster, sometimes asking the AI is faster.
I use LLMs for what they are good at: generative stuff. I know some tasks would take me a long time, and I can shortcut them easily with LLMs.
So here's a ChatGPT example query* which is completely off:
https://chatgpt.com/share/67f5a071-53bc-8013-9c32-25cc2857e5...
* It's intentionally bad, to be able to compare with Google.
And here's the web result, which is spot on:
https://imgur.com/a/6ELOeS1
LLMs are great when you want AN answer and don't want to get sidetracked.
Search is great when you want to know what answers are out there. The best example is Recipes... From what spices go into chai to the spice mix in any given version of chili (let's not start on beans).
The former is filling in missing knowledge; the latter is learning.
https://imgur.com/a/boNS2YZ
https://chatgpt.com/share/67f5a9f9-f0a8-800d-9101-aafb88e455...
which I think is way better than google.
Google offered me a few hits with existing businesses, with ChatGPT I need to do another query.
Out of curiosity I tried it, and it did take me to a wholesale company (a single result), but the Google results are better, with cheaper options (multiple good results), and I can also parse the list faster by eye.
Sure, I can just write a better prompt:
https://chatgpt.com/share/67f5b09b-c154-8013-840f-934af8302f...
This is my third attempt to get it right, but it found me one which I haven't seen before. However I would still do a Google search to be thorough and get the best deal.
https://chatgpt.com/share/67f5b165-b7c0-8000-b81c-4dc869e163...
It did take more than a few minutes to do the research, looks thorough though.
So yeah, I do still use search engines, specifically Kagi and (as a fallback) DuckDuckGo. From either of them I might tack on a !g if I'm dissatisfied with the results, but it's pretty rare for Google's results to be any better.
When I do use an LLM, it's specifically for churning through some unstructured text for specific answers about it, with the understanding that I'll want to verify those answers myself. An LLM's great for taking queries like "What parts of this document talk about $FOO?" and spitting out a list of excerpts that discuss $FOO that I can then go back and spot-check myself for accuracy.
Ex. https://www.google.com/search?udm=14&q=your%20query
DuckDuckGo also has an option where you can turn off the AI search so that you don't have to specify every time. I've found DDG sometimes gives me better results than Google and sometimes doesn't.
For example Jeep consistently lands at the bottom of the reliability ratings. Try asking GPT if Jeeps are reliable. The response reads like Jeep advertising.
My impression is that different LLMs are more or less people-pleasing. I found Grok more willing to tell me something is a bad idea.
Looking at the reasoning traces for the new reasoning models, you can actually see how fine-tuning is moving toward having models list the assumptions around data sources, decide which should be trusted, list multiple perspectives, and then summarize, resulting in better answers. You can do that today with non-reasoning models, but you need to prompt-engineer it to ask for that explicitly. This process of identifying not just extant content, but teaching systems how to approach problem analysis (instruction tuning, reasoning traces, etc.) will be key to influencing how the models work and, increasingly, how they are differentiated.
In general, the models lean towards being yes-men on just about every topic, including things without official sources. I think this is a byproduct of them being trained to be friendly and agreeable. Nobody wants a product that's rude or contrarian, and this puts a huge finger on the scale. I imagine a model unfiltered for safety, attitude, and political correctness would have less of this bias (but perhaps more of other biases).
I very much prefer more disagreeable, critical models. GPT 4o and o3-mini will sometimes not tell you that you, e.g., didn't attach a file you asked to be analyzed and will instead hallucinate its contents, presumably not to upset you. Of course, their hallucinations are way more annoying.
https://chatgpt.com/share/67f57459-2744-8009-a94e-3b67dce8fd...
“[Jeeps] often score below average in reliability rankings from sources like Consumer Reports and J.D. Power.”
https://g.co/gemini/share/b5e5ea80548b
Seems entirely reasonable to me. Didn't have to trick it into providing citations.
If you want to know how modern Jeep models stack up against their peers in terms of reliability, try asking GPT that question!
Our current LLMs are kneecapped because they are very reluctant to be negative.
For me, searches fall into one of three categories, none of which are a good fit for LLMs:
1. A single business, location, object, or concept (I really just want the Google Maps or Wikipedia page, and I'm too lazy to go straight to the site). For these queries, LLMs are either overkill or outdated.
2. Product reviews, setup instructions, and other real-world blog posts. LLMs want to summarize these, and I don't want that.
3. Really specific knowledge in a limited domain ("2017 Kia Sedona automatic sliding door motor replacement steps," "Can I exit a Queue-Triggered Azure Function without removing it from the queue?"). In these cases, the LLMs are so prone to hallucination that I can't trust them.
The answer I'm seeking is not always on reddit itself, but google limited to reddit is far more likely to give me quality starting links than google unbound is.
The AI word salad summaries for each individual page have no toggle (unless you count !g).
Searching 'octatrack' has the wikipedia page on the top right as a summary box thing. no ai word salad for me :shrug:
I've mostly switched to using Claude these days, with MCPs for websearch and fetching specific remote or local files. It answers questions generally very accurately (from the source documents it identifies) and includes citations.
I've found that people that haven't really tried the latest models, and just rely on whatever knowledge is in the model training are really missing out on the potential power. GPT4o+ and equivalent models really changed the game. And using tools to do a search, or pull in your code, or run a db query or whatever enables them to either synthesize information or generate context relevant material. Not perfect for everything, but much better than a year ago, or what people are doing with the free systems.
If it can't manage one small fact about something that was covered quite a bit in the previous month, then it is worse than useless. At the very least it should say "I don't know." It reminds me of that one guy we all know who does nothing but make stuff up when talking about things outside his wheelhouse. Never backs down, never learns anything, and is ultimately dumped from the relationship.
Even without much customization (lenses, scoring, etc) it's so much better (for my use cases) I happily pay for it.
Recently I have also started to use Perplexity more for "research for a few minutes and get back to me" type of things.
Queries like "what was that Python package for X" I usually ask an AI right from my editor, or ChatGPT if I'm in the browser already.
2 recent success stories:
I was toying around with an ESP32; I was experimenting with turning it into a Bluetooth remote control device. The online guides help to an extent, setting up and running sample projects, but the segue into deploying my own code was less clear. LLMs are "expert beginners", so this was a perfect request for one. I was able to jump from demos to deploying my own code very quickly.
Another time I was tinkering with OPNsense and setting up VLANs. The router config is easy enough, but what I didn't realize before diving in was that the switch and access point require configuration too. What's difficult about searching this kind of problem is that most of the info is buried in old blog posts and forum threads and requires a lot of digging and piecing together disparate details. I wasn't lucky enough to find someone who did a writeup with my exact setup, but since LLMs are trained on all those old message boards, this was again a perfect prompt playing to their strengths.
The results from LLMs are still too slow, vary too much in quality and still frequently hallucinate.
My typical use-case is that when I'm looking for an answer I make a search query, sometimes a few. Then scan through the list of results and open tabs for the most promising of them - often recognising trusted, or at least familiar, sites. I then scan through those tabs for the best results. It turns out I can scan rapidly - that whole process only takes a few seconds, maybe a minute for the more complex queries.
I've found LLMs are good when you have open-ended questions, when you're not really sure what you're looking for. They can help narrow the search space.
At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.
However, I am missing forum-style communities more and more. Sometimes I don't want the correct answer; I want to know what someone who has been in the trenches for 10 years has to say. For my day job I can just make a phone call, but for hobbies, side projects, etc. I don't have the contacts built up, and I don't always have local interest groups that I can tap for knowledge.
LLMs can't be trusted, you have no way to tell between a correct answer and a hallucination. Which means I often end up searching what the LLM told me just to check, and it is often wrong.
Search engines can also lead you to false information, but you have a lot more context. For example, a StackOverflow answer has comments, and often, they point out important nuances and inaccuracies. You can also cross-reference different websites, and gauge how reliable the information is (ex: primary source vs Reddit post). A well trained LLM can do that implicitly, but you have no idea how it did for your particular case.
What are the specs for the new Google Pixel 9a? An LLM can't answer this; maybe after a year it can.
Not anymore
I've always found it curious how the level of technology in Children of Men seemed to have stalled at around the time of the outbreak. From where I see it, the maturation of LLMs is the zombie outbreak for human communication.
Google seems to be better at bringing up a variety of stackoverflow and blogposts relevant to my search queries. Qwant seems to struggle exactly with that: it's great at giving exactly what I was searching for but that's sometimes not what I was looking for, if you get what I mean.
In a sense, LLMs are actually perfect for that. But like you say, the super confident hallucinations are just too frustrating. Literally every time I've asked one a serious programming question, it's hallucinated an API that doesn't exist. Everyone seems to be focusing on letting LLMs solve math and thinking problems. That's exactly what _I'm_ good at. I would much rather have an LLM that is good at combining sources and giving me facts (knowledge, rather than thinking) while above all being able to say "I don't know".
Last night, I asked Claude 3.7 Sonnet to obtain historical gold prices in AUD and the ASX200 TR index values and plot the ratio of them, it got all of the tickers wrong - I had to google (it then got a bunch of other stuff wrong in the code).
Also yesterday, I was preparing a brief summary of forecasting metrics/measures for a stakeholder and it incorrectly described the properties of SMAPE (easily validated by checking Wikipedia).
I constantly have issues with my direct reports writing code using LLMs. They constantly hallucinate things for some of the SDKs we use.
It was a bit more useful for questions like "Rank these stocks by exposure to the Chinese market", as you can prioritise your own research, but in the end you just have to go through the individual company filings yourself.
But now, the veracity of most LLMs' responses is terrible. They often include “sources” unrelated to what they say and make up hallucinations when I search for what I'm an expert in. Even Gemini in Google Search told me yesterday that Ada Lovelace invented the first programming language in the 18th century. The trust is completely gone.
So, I'm back to the plain old search. At least it doesn't obscure its sources, and I can get a sense of the veracity of what I find.
I mean, for everyone else it was never there to begin with. Hallucinations are constantly raised as the biggest issues with AI. According to the tests, and my experience, newer AI models are objectively better, not worse than the ones from a few years ago. They still have a long way to go/may never be fully trustworthy though.
What I have lost trust in, and what I and many feel has become much worse over the few years, is Google search, and all the other search engines that are based on it.
Isn't that what ChatGPT/Anthropic web access does?
I recently upgraded my video card, and I run a 4K display. Suddenly the display was randomly disconnecting until I restarted the monitor. I googled my brains out trying to figure out the issue, and got nowhere.
So I gave ChatGPT a shot. I told it exactly what I upgraded from/to, and which monitor I have, and it said "Oh, your HDMI 2.0 cable is specced to work, but AMD cards love HDMI2.1 especially ones that are grounded, so go get one of those even if it's overspecced for your setup."
So I did what it said, and it worked.
A query in a regular search engine can at best perform like an LLM-based provider like Perplexity for simple queries.
If you have to click through or browse several results, forget it; it makes no sense not to use an LLM that provides sources.
I just searched for "What is inherit_errexit?" at Perplexity. Eight sources were provided and none of them were the most authoritative source, which is this page in the Bash manual:
https://www.gnu.org/software/bash/manual/html_node/The-Shopt...
Whereas, when I searched for "inherit_errexit" using Google Search, the above page was the sixth result. And when I searched for "inherit_errexit" using DuckDuckGo, the above page was the third result.
I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.
This makes a lot of sense to me. As a young guy in the '90s I was told that some day "everyone will be fluent in computers", and 25 years later it's just not true. 95% of my peers never developed their fluency, and my kids even less so. The same will hold true for AI; it will be what smartphones were to PCs: a dumbed-down interface for people who want to USE tech, not understand it.
[0]: not that I write blog post articles anyway, it's just a fantasy day dream thing that's been running through my head
Or I can just go to DDG/Google, and be done with it. No need to pre-load my "search engine" with context to get results.
Alt + Tab > Ctrl + T > Type > Enter > PgDn > Click > PgDn > Alt + Left > Click > PgDn > Alt + Left > Click > PgDn > Alt + Tab > [Another 45-60 minutes coding] > GOTO Start
With these keybinds (plus clicking mouse, yuck) I can read Nx sources of information around a topic.
I'm always looking to read around the topic. I don't stop at the first result. I always want to read multiple sources to (a) confirm that's the standard approach (b) if not, are there other approaches that might be suitable (c) is there anything else that I'm not aware of yet. I don't want the first answer. I want all the answers, then I want to make my own choices about what fits with the codebase that I am writing or the problem domain that I'm working in.
Due to muscle memory, the first four or five steps I can do in one or two seconds. Sometimes less.
Switching to the browser puts my brain into "absorb new information" mode, which is a different skill to "do what IDE tells me to do". Because, as a software engineer, my job is to learn about the problem domain and come up with appropriate solutions given known constraints -- not to blindly write whatever code I'm first exposed to by my IDE. I don't work in an "IDE context". I work in a "solving problems with software context".
==
So I agree with the GP. A lot of posts I see about people saying "why not just use LLM" seem to be driven by a motivation for convenience. Or, more accurately, unconsidered/blind laziness.
It's okay to be lazy. But be smart lazy. Think and work hard about how to be lazy effectively.
For other topics, exact pedantic correctness may not always be as important, but I definitely do want to be able to evaluate my sources nevertheless, for other obvious reasons.
Search is actually pretty much what I want: a condensed list of possible sources of information for whatever I'm looking for. I can then build my own understanding of the topic by checking the sources and judging their credibility. Search seems to have been getting worse lately, sadly, but it's still useful.
If they get rid of those operators, then that would be really bad. But I have a feeling that’s what a lot of search engine people are itching to do.
Conversely it’s a huge mistake to rely on LLMs for anything that requires authoritative content. It’s not good at appropriately discounting low quality sources in my experience. Google can have a similar problem, but I find it easier to find good sources there first for many topics.
Where LLMs really replace modern google is for topics you only kind of think should exist. Google used to show some pretty tenuously related links by the time you got to page 5 results and there you might find terms that bring you closer to what you’re looking for. Google simply doesn’t do that anymore. So for me, one of the joys is being able to explore topics in a way I haven’t been able to for over a decade
However, it still blinds you to the larger picture. Providing supporting sources is all well and good, but doesn't help you with the larger view. I want the larger view.
I don't have a circle of friends, so I have no idea what other people are doing, outside of what I read online.
I use an LLM a lot for coding. However, I was never as much into doing web searches for programming problems anyway, I used docs more and rarely needed sites like SO. I haven't therefore moved away from search engines for that side of things.
With chatbots I first need to formulate a question (or, I feel like I do), then wait for it to slowly churn out an overly wordy response. Or I need to prompt it first to keep it short.
I suppose this difference is different if you already used a search engine by asking it a fully formulated question like "What is a html fieldset and how do I use it?" instead of "html fieldset" and clicking through to MDN.
By decades, I assume at least two, so a minimum of 20 years. I'm very interested to hear about your experience.
Would you please elaborate on how you filter, or specifically what techniques you use to get your desired results?
Thanks.
I use ChatGPT for text summarization and translation, and Midjourney for slide decks and graphic-design ideation.
I would use the analogy of consuming a perfectly tasty and nutritious meal crafted by chef ChatGPT vs. visiting a few restaurants around your neighborhood and tasting different cuisines. Neither approach is wrong, but you get different things and values out of each. Do what you feel like doing!
Last week, there was a specific coding problem I needed help with. I asked ChatGPT, which gave me a great answer, except I then spent a few hours trying to figure out why the function ChatGPT was using wasn't being included, despite the #include directives being all correct. Neither ChatGPT nor Google were helpful. The solution was to just take a different approach in my code. If I had only googled, I wouldn't have spent that time chasing the wrong solution.
Also consider this: when you ask a question, there are a bunch of rude (but well-meaning) people who ask you things like "what are you really trying to do?" and who criticize a bunch of unrelated things about your code/approach/question. A lot of times that's just annoying, but sometimes it gives you really good insights into the problem domain.
Someone at work yesterday asked me if I knew which bus lines would be active today due to the ongoing strike. Googled, got a result, shared back in under 10 seconds.
Out of curiosity I just checked with various LLMs through t3.chat, with all kinds of features, none had anything more than a vague "check with local news" to say. Last one I tried Gemini with Deep Research and what do you know, it actually found the information and it was correct!
It also took nearly 5 minutes...
Like, I feel if your search is about _reality_ (what X product should I buy, is this restaurant good, when is A event in B city, recipes, etc.), then LLMs are severely lacking.
Too slow, with almost always incomplete answers if not straight-up incorrect ones; deep research tends to work if you have 20 minutes to spare, both to get an initial answer and to manually go and vet the sources / look for more information in them.
If it is more of an open ended question that I am not sure there'll be a page with an answer for, I am more likely to use ChatGPT/Claude.
Same with my wife (non-technical) and teenage daughter.
People should do what makes them feel good, but I think we're all going to get a bit dumber if we rely too much on LLMs for our information.
I personally still use search engines daily when I know what it is that I am searching for. I am actually finding that I am reaching less for LLMs even though it is getting easier and cheaper (I pay for T3 Chat at $8USD p/m).
Where I find LLMs useful is when I am trying to unpack a concept or I can't remember the name of something. The result of these chats often lead to their own Google searches. Even after all this development, the best LLMs still hallucinate constantly. The best way that I've found to reduce hallucinations is to use better prompts. I have used https://promptcowboy.ai/ to some success for this.
You don't want AIs reproducing information necessarily. But they are really great at interpreting your query, digging out the best links and references using a search engine and then coming up with an answer complete with links that back that up.
I'd suggest just giving perplexity a spin for a few days. Just go nuts with it; don't hold back. It's one of the better AI driven search tools I've seen.
As an example, someone typo'd an abbreviation, so I asked GPT and it gladly made up something for me. So I gave it a random abbreviation, and it did the same (using its knowledge of the game).
Even when I tell it the specific version I'm playing it gets so much wrong it's basically useless. Item stats, where mobs are located, how to do a certain quest - anything. So I'm back to using websites like wowhead and google.
- If I am seeking technical information, I would rather get it from the original source. It is often possible to do that with a search. The output from an LLM is not going to be the original source. Even with dealing with secondary sources, it is typically easier to spot red flags in a secondary source than it is with the output of an LLM.
- I often perform image searches. I have no desire for generated images, though I'm not going to object to one if someone else "curated" the outputs of an AI model.
That said, I will use an LLM for things that aren't strictly factual. i.e. I can judge if it is good enough for my needs by simply reading it over.
Until LLMs stop responding with over confident “MBA talk” that sounds impressive but doesn’t really say much, I’ll continue to use search engines.
Image searches without having to describe every minute detail of what I'm looking for?
Bah, even for some searches that are basically Wikipedia/historical lookups... the UI in Google Search is so much easier than ChatGPT's endless paragraphs with unclear sources, etc.
For some things Google's AI results are helpful too, if not to just narrow down the results to certain sources.
There's no chat interface helping any of this
Search is for finding specific websites and products. Totally different things.
I’m mostly using my personal SearXNG instance and am still finding what I’m looking for.
On systems where I don’t have access to that, I’m currently trying Mojeek and experiment with Marginalia. Both rather traditional search engines.
I’m not a big fan of using LLMs for this. I rather punch in 3-5 keywords instead of explaining to some LLM what I’m looking for.
Basically, there’s a lot of good and specific information on the web, but not necessarily combined in the way I want. LLMs can help break apart my specific combination at a high level but struggle with the human ability to get to solutions quickly.
Or maybe I just suck at asking questions, haha.
1. questions where I expect SEO crap, like for cooking recipes, are for LLMs. I use the best available LLM for those to avoid hallucinations as much as possible, 2.5 pro these days. With so much blogspam, LLMs are actually less likely to hallucinate at this point than the real internet IMO.
2. Questions whose answer I can immediately verify, like "how do I do x in language y", also go to an LLM. If the suggestion doesn't work, then I google. My stackoverflow usage has fallen to almost 0.
3. General overviews / "how is this algorithm called" / "is there a library that does x" are LLMs, usually followed by Googling about the solutions discussed.
4. When there's no answer to my exact question anywhere, or when I need a more detailed overview of a new library / language, I still read tutorials and reference docs.
5. Local / company stuff, things like "when is this place open and how do I call them" or "what is the refund policy of this store" are exclusively Google. Same for shopping (not an American, so LLM shopping comparisons aren't very useful to me). Sadly, online reviews are still a cesspool.
For programming stuff that can be immediately verified LLMs are good. They also cover many cases where search engines can't go (e.g. "what was that song where X did Y?"). But looking up facts? Not yet. Burned many times and not trying it again until I hear something changed fundamentally.
The serendipity of doing search with your own eyes and brain on page 34 of the results cannot be overstated. Web surfing is good, and it does things that curated results (i.e., Google's <400, Bing's <900, Kagi's <200, an LLM's very limited single result) cannot.
It's extremely disheartening. I have no trust in Youtube staying accessible as a font of public knowledge. It just works out that way now.
Reddit seems hit or miss depending on the topic. Plenty of threads there where [deleted] asked a question and [disgruntled user] replied with something which has been replaced with random text by a fancy deletion tool.
Google wants to show me products to buy, which I'm almost never searching for, or they're "being super helpful" by removing/modifying my search terms, or they demonstrate that the decision makers simply don't care (or understand) what search is intended to accomplish for the user (ex: ever-present notices that there "aren't many results" for my search).
Recently tried to find a singer and song title based on lyrics. Google wouldn't present either of those, despite giving it the exact lyrics. ChatGPT gave me nonsense until I complained that it was giving me worse results than Google, at which point it gave me the correct singer but the wrong song, and then the correct song after pointing out that it was wrong about that.
Still can't get Google to do it unless my search is for the singer's name and song title, which is a bit late to the party.
Specific search expecting one answer. This type of search is enhanced by ChatGPT. Google is losing here.
Wild goose chase / brainstorming. For this, I need a broad set of answers. I am looking for a radically different solution. Here, today's Google is inferior to the OG Google. That is for 2 reasons.
1. SEOs have screwed up the results. A famous culprit is Pinterest, along with many other irrelevant sites that fill the first couple of pages.
2. Self-censoring & shadow banning. Banning of torrent sites, politically motivated manipulation. Even when the topic I am searching for is not political, there is some issue with the results. I can see the difference when I try the same search in Bing or DuckDuckGo.
ddg is often faster for when I want to get to an actual web site and find up-to-date info, for "search as navigation".
LLMs are often faster for finding answers to slightly vague questions (where you know you're going to have to burn at least as much climate wading through blogspam, ads, and videos-that-didn't-need-to-be-videos if you do a search).
When I need to search, I use a search engine and try to find a trustworthy source, assuming one is available.
I use gemini more on my phone, where I feel like going through search results and reading is more effort, but I'll fall back to searching on duck duck go fairly often.
On a desktop I generally start at duck duck go, and if it's not there, then I don't bother with AI. (I use copilot in my editor, and it's usually helpful, but not really "search").
I won't deny LLMs can be useful, but they're like the news: double-check and form your own conclusions.
Yes, I still use search engines and almost always find what I need in long form if I can’t figure it out on my own.
Given my time dedicated to researching things, I feel like I am "more productive" because I waste less time.
But I do my due diligence to double-check what ChatGPT suggests. So if I ask ChatGPT to recommend a list of books, I double-check with Goodreads and Amazon reviews/ratings. Like that. I guess it's like having a pair-research-sesson with an AI librarian friend? I am not sure.
But I know that I am appreciative. Does anyone remember how bad chatbots were before the arrival of low-hanging-AI-fruits like generative AI? Intel remembers.
I echo what others say, Kagi is a joy to use and feels just like Google used to be - useful
I use perplexity pro + Claude a lot as well. Maybe too much but mostly for coding and conversations about technical topics.
It really depends on intent.
I have noticed that I’ve started reading a lot more. Lots of technical books in the iPad based on what I’m interested in at the moment.
These tools are useful, but in my view the level of trust commonly being placed in them far exceeds their capabilities. They’re not capable of distinguishing confidently worded but woefully incorrect Reddit posts from well-verified authoritative pages, which, combined with their inclination to hallucinate and their overeagerness to please the user, makes them dangerous in an insidious way.
When I do, it's because either I can't think of good terms to use, and the LLM helps me figure out what I'm looking for, or I want to keep asking follow-up questions.
Even then, I probably use an LLM every other week at most.
Why would I want to have a conversation in a medium of ambiguity when I could quickly type in a few keywords instead? If we'd invented the former first, we'd build statues of whoever invented the latter.
Why would I want to use a search service that strips privacy by forcing me to be logged in and is following the Netflix model of giving away a service cheap now to get you to rely on it so much that you'll have no choice but to keep paying for it later when it's expensive and enshittified?
This can be very difficult, if there's a lot of semantic overlap with a more commonly-searched mainstream topic, or if the date-range-filtering is unreliable.
Sometimes I'll look for a recipe for banana bread or something, and searching "banana bread recipe" will get me to something acceptable. Then I just have to scroll down through 10 paragraphs of SEO exposition about how much everyone loves homemade banana bread.
Searching for suppliers for products that I want to buy is, ironically, extremely difficult.
I don't trust LLMs for any kind of factual information retrieval yet.
No, I don't use the hallucination machines to search, and I never will.
I use search engines to search. I use the "make shit up" machine when I want shit made up. Modern voice models are great for IVR menus and other similar tasks. Image generation models have entirely taken over from clipart when I want a meaningless image to represent an idea. LLMs are even fun to make up bogus news articles, boilerplate text to fill a template, etc. They're not search engines though and they can't replace search engines.
If I want to find real information I use a search engine to find primary sources containing the keywords I'm looking for, or well referenced secondary sources like Wikipedia which can lead me to primary sources.
But if I then click the Google search text box at the top, and start typing, it takes 20 seconds for my text to start appearing (the screen is clearly lagged by whatever Google is doing in the background), and then somehow it starts getting jumbled. Google is the only web page this happens to.
I actually like their results, they just don't want me to see their results. Weird business model.
If by syntax you mean "words related to what I want to find" then I suppose.
> youre fine with erroneous results.
Because AI is famously correct all the time. And there are no "erroneous" results in my method, only ones not closely enough related to what I wanted, so that's feedback my search terms should be refined.
Any answer I get from AI I’m going to have to verify anyway so I just skip the AI step.
Yeah, seems the same to me: type some stuff, read the response, evaluate, loop or break. Cool, you hate AI and don't want to learn a new pattern. But you say "I don't want to think", which doesn't make sense - you are thinking just as hard.
I don’t know what that means in any specific sense.
I don’t want to think about how to interact with the search mechanism. It’s the same problem with using voice assistants like Siri - I have to think about how to construct my query so that it will be parsed correctly.
With most search engines I can just type in disparate terms that might be related. The order doesn’t matter, the phrasing doesn’t matter, I don’t need to give it instructions on how to respond, etc.
Of course, I have used Phind and other LLMs, and the results sometimes are useful, but in general the information they give back feels like a summary written for the “Explain Like I'm Five” crowd, it just gives me more questions than answers, and frustrates me more than it helps me.
Where LLMs excel is when I don't know the exact search term to use for some particular concept. I ask the LLM about something, it answers with the right terms I can use in a search engine to find what I want, then I use these terms instead of my own words, and what I want is in the search results, in the first page.
The question is: are you searching for answers to something, or are you searching for a site/article/journal/whatever in order to consume the actual content? If you are searching for a page/article/journal in order to find an answer, then the journal/article itself was just a detour, provided the LLM could give you the answer and you could trust it. But if you were looking for the page/article itself, not some piece of information IN the article, then ChatGPT can (at best) give you the same URL Google did, but 100x slower?
But a lot of my classic ADHD "let's dive into this rabbit hole" google sessions have definitely been replaced by AI deep searches like Perplexity. Instead of me going down a rabbit hole personally for all the random stuff that comes across my mind, I'll just let perplexity handle it and I come back a few minutes later and read whatever it came up with.
And sometimes, I don't even read that, and that's also fine. Just being able to hand that "task" off to an AI to handle it for me is very liberating in a way. I still get derailed a bit of course, but instead of losing half an hour, it's just a few seconds of typing out my question, and then getting back to what I've been doing.
The more you trust the models, the less cognitive load you spend on checking and verifying, which will lead to what people call AI but which is actually nothing more than a for loop over data loaded in memory. Those who still think that `for message in messages...` can represent any sort of intelligence have already been brainwashed by a new iteration of the "one-armed bandit", where you click regenerate indefinitely with a random seed, distracted from what is going on around you.
Just now for example I wanted to know how Emma Goldman was deported despite being a US citizen. Or whether she was a citizen to begin with. If an LLM gave me an answer I for sure would not trust it to be factual.
My search was simple: Emma Goldman citizenship. I got a Wikipedia article claiming it was argued that her citizenship was considered void after her ex-husband’s citizenship was revoked. Now I needed to confirm it from a different source and also find out why her ex’s citizenship was revoked. So I searched his name + citizenship and got a New Yorker article claiming it was revoked because of some falsified papers. Done.
If an LLM told me that, I simply wouldn’t trust it and would need to search for it anyway.
https://kagi.com/search?q=how+Emma+Goldman+was+deported+desp...
But you're right, I'd have to check the sources cited before I'd trust the answer.
Hence, search remains my hope until SO and the like decay.
Additionally, many search engines now generate quick summaries or result snippets without much prompt-fu, so my day-to-day usage has actually settled at roughly a 40:60 LLM-to-search ratio.
Still have a trust issue with LLMs/ChatGPT for facts. Maybe in a couple of years my mindset will shift and I'll trust them more.
But in fact I overwhelmingly use search over llm because it's an order of magnitude quicker (I also have google search's ai bobbins turned off by auto-using "web" instead of "all".)
I've used llm "for real" about 3 times in the last two months, twice to get a grounding in an area where I lacked any knowledge, so I could make better informed web searches, and once in a (failed) attempt to locate a piece of music where web search was unsuccessful.
I'd rank kagi > chatgpt > google any day.
I just tried ChatGPT and saw that you can ask it to search the web and also can see its sources now. I still remembered how it was last time I used it, where it specifically refused to link out to external sources (looks like they changed it around last November). That's a pretty good improvement for using it as search.
One interesting trend that I like is that I started using local LLMs way more in the last couple of months. They are good enough that I was able to cancel my personal ChatGPT subscription. Still using ChatGPT on the work machines since the company is paying it.
- I use RSS to see 'what's new', and to search it. My RSS client supports search
- I maintain a list of domains, so when I want to find a particular place I check my list (I can search domain titles, descriptions, etc.). I have 1 million domains [0]
- If I want more precise information I try to google it
- I also may ask chatgpt
So in fact I am not using one tool to find information. I use many tools, and often narrowing it down to tools that most likely will have the answer.
[0] https://github.com/rumca-js/Internet-Places-Database
The biggest issue is when GPT returns something that doesn’t match your knowledge, experience, or intuition and you ask the “are you sure?” question, it seems to inevitably come back with “you’re right!”. But then why/how did it get it wrong the first time? Which one is actually true? So I go back to search (Kagi).
So for me, LLMs are about helping to process and collate large bodies of information, but not final answers on their own.
On the flip side, any time I'm searching for something programming-related (FE, JavaScript in my case), search is a last resort, for when an LLM isn't giving me the answer I'm looking for.
This is still shocking to me, I really never thought I would replace my reliance on Google with something new.
Operator words still work in Google, albeit less reliably than in the past, but they still do the job.
I see the AI as being there to do the major leg work. But the devil's in the details and we can't simply take their word that something is fact without scrutinizing the data.
I use Claude pretty exclusively, and GPT as a backup, because GPT errors too much, tries to train on you too much, and has a lackluster search feature. The web UIs are not these companies’ priority, as they focus more on other offerings and API behavior. Which means any gripe will not be addressed, and you have to just go for the differentiating UX.
For a second opinion from Claude, I use ChatGPT and Google pretty much the same amount. Raw google searches are just my glorified reddit search engine.
I also use offline LLMs a lot. But my reliance on multimodal behavior brings me back to cloud offerings.
There is some room for optimism, though. There's been a rise in smaller search engines with different funding models that are more aligned with user needs. Kagi is the only one that comes to mind (I use it), but I'm sure there are others.
Though lately for more in-depth research I've been enjoying working with the LLM to have it do the searching for me and provide me links back to the sources.
That’s if they can swing the immense ads machine (and by that I mean the ads organisation not the tech) and point it at a new world and a different GTM strategy.
They still haven’t figured out how to properly incentivise content producers. A lazy way would be to display ads that the source websites would display alongside the summary or llm generated response and pass on any CPM to the source.
Keep in mind that my 75% doesn't count queries where I get my answer from Google Gemini. I'm just guessing, but if you added those in, it would rise to 85-90%.
My thought is if browsers and phones started pushing queries over to an LLM, search (and search revenue) would virtually disappear.
For example I asked it about rear springs for a 3rd gen 4runner and it recommended springs for a 5th gen.
- Specific documentation
- Datasets
- Shopping items
- Product reviews
But for the search engines I use, their branded LLM response takes up half of the first page. So that 25% figure may actually be a lot smaller.
It's important to note that these search engine LLM responses are often ludicrously incorrect -- at least, in my experience. So now I'm in this weird phase where I visit Google and debate whether I need to enter search terms or some prompt engineering in the search box.
I was very surprised to hear this, and it made me wonder how much of traditional SEO will be bypassed through LLM search results. How do you leverage trying to get ranked by an LLM? Do you just provide real value? Or do you get featured on a platform like Chrome Extensions Store to improve your chances? I don't know, but it is fun to think about.
Learning is fun! Reading is good for you! Being spoon-fed likely-inaccurate/incomplete info or unmaintainable code is not why I got into computers.
If I want to play with ideas, I chat with AI. If I need facts, I use search.
For the people who say they've reduced their search engine use by some large percentage, do you never need to find a particular document on the web or look for reference material?
Unlike Google or Duck Duck Go, which serve up links that we can instantly judge are relevant to us, LLMs spin stories that sound pretty good but may be, and often are, insidiously wrong. It’s too much effort to fact-check them, so people don’t.
I use ChatGPT at home constantly, for history questions, symptoms of an illness, identification of a plant while hiking, remembering a complex term or idea I can't articulate, tips for games, and the list goes on.
At work it's Copilot.
I've come to loathe and mock Google search and I can't be the only one.
Earlier today I was trying to remember the name of the lizard someone tweeted about seeing in a variety store. Google search yielded nothing. Gemini immediately gave me precise details of what I was talking about, it linked to web resources about it.
And yes, just plain old Google search is completely lackluster in comparison to the perplexity.ai search I get to do today.
Google is trying to be too clever and failing at that at the same time. I use it for some searches when I roughly know what I'm looking at. The longer the query the more likely it is I'll be using perplexity.
These are the things I usually search for:
* lazy spell check
* links to resources/services
* human-made content (e.g. reviews, suggestions, communities)
Genuinely curious - those who use chatbots regularly in lieu of search, what kinds of things are you prompting it for?
The only advantage Google and other traditional search engines have over AIs is that they're very fast. If I know for certain I can get what I want in under 1s I might as well use Google. For everything else, Perplexity or ChatGPT is going to be faster.
Exploratory/introductory/surface-level queries are the ones that get handed to auto-complete.
I like how Kagi lets me control whether AI should be involved by adding or omitting a question mark from my search query. Best of both worlds.
I'm still using Google for searches on Reddit these days because Reddit's own search engine is terrible.
I mostly use Perplexity for search, sometimes ChatGPT. Only when I am looking for something _very_ specific do I use a traditional search engine.
Dropping usage of search engines compounded by lack of support led to me cancelling my Kagi subscription and now I just stick with Google in the very rare occasions that I use a search engine at all. For a dozen searches or so a month, it wasn't worth it to keep paying for Kagi.
LLMs are amazing for technical research or getting a quick overview and a clear explanation without clicking through ten links. But for everyday searches — checking restaurant hours, finding recent news, digging into niche forums, or comparing products — search engines are still way better.
I don’t think it’s a matter of one replacing the other — they serve different purposes.
But I appreciate and read the Google Gemini AI generated response at the top of the page.
Also, I'm an iPhone user. But I have a Google Pixel phone for dev work.
I find myself now using 'Hey Google' a lot more because of the Gemini responses.
It's particularly fun playing with it with the kids on road trips as we ask it weird questions, and get it to reply in olde english, or speak backwards in French and so on!
We're in a bubble here.
Here's the difference as per chatgpt search https://chatgpt.com/share/67f5ae28-5700-800d-b241-386462a307...
I used to use DDG for syntax problems (so many programming languages....) and it usually sent me to SO.
Now I use DeepSeek. Much friendlier, I can ask it stupid questions without getting shut down by the wankers on SO. Very good
I still use DDG to interface with current events and/or history. For history DDG is primarily, not only, an interface to Wikipedia
https://chromewebstore.google.com/detail/comparative-chatgpt...
I feel like Google search will become obsolete before long, and they'll have to make big changes to their UX and search engine.
Although I guess most of its user base still relies on the old ways, so changing it right now would have a huge impact on older users.
For instance, I wanted help cooking coq au vin yesterday. I’ve cooked it before but I couldn’t remember what temperature to set the oven to. I read about five recipes (which were all wildly different) and chose the one that best suited the ingredients and quantities I was already using.
I asked chat gpt for a coq au vin recipe, and I’ll just say I won’t be opening a restaurant using ChatGPT as my sous chef anytime soon.
I can only really validate the generated response when it's code. On other stuff, I usually just trust and read the response, which is not good, I guess.
Hope you were satisfied with the food at the end :)
Websites have all kinds of extra context and links to other stuff on them. If I want to learn/discover stuff then they are still the best place to go.
For simple informational questions, all of that extra context is noise; asking gpt "what's the name of that cpp function that does xyz" is much faster than having to skim over several search results, click one, wait for 100 JavaScript libraries to load, click no on a cookies popup and then actually read the page to find the information only to realise the post is 15 years old and no longer relevant.
There are times where I know exactly what website to go to and where information is on that site and so I prefer that over AI. DDGs bangs are excellent for this: "!cpp std::string" and you are there.
Then there's the verifiability thing. Most information I am searching for is code which is trivial to verify: sometimes AI hallucinates a function but the compiler immediately tells me this and the end result is I've wasted 30 seconds which is more than offset by the time saved not scrolling through search.
Examples of things that aren't easy to verify: when's this deprecated function going to be removed, how mature is tool xyz.
Of course, there's also questions about things that happened after the AI's knowledge cutoff date. I know there are some that can access the internet now but I don't think any are free
I'd also happily turn off several other search features, more directly tied to revenue, which is probably why they don't like adding options. I'm sure their AI will be selling products soon enough. Got to make those billions spent back somehow.
This constrains the search space to whatever training data set used for the LLM. A commercial search engine includes resources outside this data set.
Using a search engine for responses to natural language questions is of dubious value as that is not their intended purpose.
Until the false results rate drops, it can't be trusted.
The more time goes by, the more I use both ChatGPT and Claude to search (at the same time, to cross-check the results), with Kagi used either to check the results when I know strictly nothing of the subject or for specific searches (restaurants, movie showings…).
I’ve almost completely stopped using Google.
If you go for the highest tier subscription on kagi, you get https://kagi.com/assistant which gives you a huge swath of AI models to handle your searching.
I use LLMs for things where accuracy anywhere between 0% and 100% is not a problem: when I need to get a feel for something, or a pointer to some resource.
I use ChatGPT for learning about topics I don't know much about. For example, I could spend 15 minutes reading wikipedia, or I could ask it to use Wikipedia and summarize for me.
The most important part for me is understanding how to communicate with each system, whether it's google-fu or prompting.
Having said that, I use ChatGPT exactly like a search engine. If I want to find info I will explicitly enable the web search mode and usually just read the sources, not the actual summary provided by the LLM.
Why do this? I find if I don't quite know the exact term I am looking for I can describe my problem/situation and let ChatGPT make the relevant searches on my behalf (and presumably also do some kind of embedding lookup).
This is particularly useful in new domains, e.g. I've been helping my wife do some legal research and I can explain my layman's understanding of a situation and ask for legal references, and sure enough it will produce cases and/or gov.uk sources that I can check. She has been impressed enough to buy a subscription.
I have also noticed that my years (decades!) of search engine skills have atrophied quicker than expected. I find myself typing into Google as I would to ChatGPT, in a much more human way, then catch myself and realise I can actually write much more tersely (and use, e.g. site:).
* adult cat sleep time -> search engines
* my cat drops his toy into his water and brings it to me -> GPT
- What other people think of product XYZ: Reddit
- Subject-specific/historical: Wikipedia
- News-specific: my favored news sources
- Coding-related: I start with ChatGPT. To validate those answers I use Google
Besides, Google has some convenient features that I frequently use, e.g., currency/unit/timezone conversion, stock chart.
It will also help get rid of the antitrust issues that the Chrome browser has created.
They can be very useful, especially when looking for something closely adjacent to a popular topic, but you got to check carefully what they say.
Personally, I don't want an LLM synthesized result to a query. I want to read original source material on websites, preferably written by experts, in the field in which my search is targeted.
What I find in serious regression in search, is interpretation of the search query. If I search for something like "Systems containing A but not B" I just get results that contain the words A and B. The logical semantics of asking for "not B" is completely ignored. Using "-B" doesn't work, since many discussions of something that doesn't have B, will mention the word B. These errors didn't seem to be so egregious historically. There seemed to be more correct semantic interpretation of the query.
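To make that failure mode concrete, here is a toy sketch (all data and names below are made up for illustration; no real engine works on an in-memory array like this) of why lexical exclusion throws away exactly the pages a "but not B" query is after:

```ts
// Why lexical "-B" fails: a page *about* a system lacking feature B
// still mentions the word "B", so keyword negation discards exactly
// the result the query wanted. (Hypothetical data, not a real index.)
const pages = [
  { url: "https://example.com/foo", text: "FooOS ships with B enabled by default" },
  { url: "https://example.com/qux", text: "QuxOS is a system built without B entirely" },
];

// What "-B" actually does: drop any page whose text contains "B".
const lexical = pages.filter(p => !p.text.includes("B"));
console.log(lexical.length); // 0 -- the QuxOS page I wanted is gone too
```

What I want is the semantic reading, "systems that lack B", which no keyword operator can express.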
I don't know if this has to do with applying LLMs in the backend of search, but if LLMs could more accurately interpret what I'm asking for, then I would be happy to have them parse my queries and return links that meet my query specifications.
But again, I don't want a synthesized result, I want to read original source material. I see the push to make everything LLM synthesized prose, to be just another attempt to put "Big Tech" between me and the info I'm trying to access.
Just link me to the original info please...
p.s. Something like the "semantic web" which would eliminate any 3rd party search agent completely would be the ideal solution.
Like I could interrogate an LLM about something technical “X” or I could just search “X documentation” and get to the ground truth.
Our projects heavily use platform tools so I am looking there rather than Googling.
I started using Kagi in an attempt to de-googlify, but it turns out that it's just downright good and now I prefer it.
If I need something more complex like programming, talk therapy, or finding new music then I’ll hop on over to Chat.
On the other hand, Google search is starting to be useless without curating my queries. And their AI suggestions are full of lies.
For everything else, I still use search.
I use Kagi as my search engine and GitHub code search for searching for code examples.
I haven't found a reason to use AI yet.
I average around 1400-1600 searches per month.
Twitter and reddit are garbage.
I sometimes use youtube search then fast forward with the subs on and the sound off.
The internet has ended. It's been a fun ride, thanks everyone.
6510 slaps hn with a large trout
I often search for solutions to specific (often exotic) problems, and LLMs are not the best at handling them.
DDG does not have the best results; I'm not sure they're better than Google's. It definitely has a different set of issues.
Finally seeing another positive comment on HN about Kagi, I decided to pull out my wallet and try it. And it's great. It feels like Google from the 2000s.
I decided to replace my subscriptions to Anthropic and ChatGPT with Kagi, where I have access to both providers and also Gemini, Meta, and others. So, bottom line, it's actually saving me money.
Their Assistant (an LLM that iterates through multiple search queries when looking for answers) is actually neat. In general it's the best of both worlds: depending on what you need, you can use the LLM interface or classic search, and both work great.
What I tend to use LLMs for is rubber ducking or the opening of research on a topic.
Boils down to the fact that the internet is full of shitty blogspam that search happily returns if your question is vague.
It is easy to filter them out when you're working in a familiar domain, but when trying to learn something completely new, it's better to ask DeepSeek for a summary and then decide what to explore.
Sparktoro (no affiliation) had a post or video about this somewhere very recently.
Until when, I don't know.
LLM is okay for some use cases, but the amount of times it hallucinates bullshit makes it not trustworthy.
If I'm not sure about an AI answer, or I suspect the content was AI-crafted, I'll search to cross-validate.
But I will say I have started to just use the AI summary at the top of Google, even though it is sometimes wrong. For example, I searched "why is the nose of a supra so long" and it started talking about people's faces vs. the car. Granted, it's not literally a nose, but yeah.
With LLM being good enough, I go to LLM for what I used to go for Wikipedia and StackOverflow.
No, just joking. I use libraries to read books.
Perplexity for anything complex
Yandex for pics (Google pics got ridiculously bad)
I think this also stems from a new design paradigm emerging in the search domain of tech. The content results and conversational answers are merging – be it Google or your Algolia search within your documentation, a hybrid model is on the rise.
I usually search for specific terms, often in quotes. My extra terms are variations on how people might word the question or answer.
Over time, I notice many sites are reliable for specific topics. I'll use site: operator to limit the search to them initially. If it's a research paper, adding "paper" and PDF usually links to it immediately. If government, it's often on a .gov page. And so on.
Search works well for me with these techniques for most of my needs. There has certainly been a drop in quality, with an increase in work, due to them optimizing for what generates ad clicks. That gives me a lot of sites that appear to be helpful but actually aren't. I can usually spot and weed them out in one session for a given topic, though, since click-farm sites are recognizable (intuitable) once you're used to them.
Finally, I try to follow the law since my Savior, Jesus Christ, requires it where possible. A.I.'s are usually trained with massive copyright infringement with outputs that may be copyright infringement. Search engines link me to the content creator to use directly. The creator also often says if they want it shared or how which I try to respect when I see it mentioned.
1. Bookmark manager. I can type "maven download sources", click on Baeldung, and copy & paste the command from there. I've done that 100 times and I'll do it a 101st time. There are plenty of webpages that I know exist and know how to find; I'm too lazy to actually bookmark and organize them, and Google works just fine for me.
2. Search for new knowledge in general domains. For this category of queries I sometimes use ChatGPT, but not always. It's hard to formulate the rules, but I have a feeling for which tool is more useful for a given question. Sometimes it's just quicker to type a few keywords into Google rather than ask ChatGPT a full-fledged question and wait for it to return a full-fledged answer.
3. Search for new knowledge in arcane domains. For example I might need new info about my city, where to buy laptop, etc. ChatGPT might know about US, but its knowledge about Kazakhstan is definitely limited, shallow and outdated, so real web comes to rescue here.
4. Using Google-specific functions. For example, I very often use queries like "site:bla.com something", because website-hosted searches are braindead in 100% of cases and I wouldn't even bother with them.
For me, ChatGPT main function is to create new content. For example I might want to start new Go project, so I'll ask ChatGPT to write hello world golang tutorial for me, then ask follow-up questions like what identifier naming should I follow, etc. There are, of course, hundreds of similar articles in the Internet, but ChatGPT delivers tailored data much faster.
If I find that I really need to be more extensively descriptive, or to actually ask a question, I'll use an LLM, but that's pretty rare.
Here's an example of the latter that I did yesterday (this is Google Gemini):
"Hello! There is a poetic device where one line of a poem can be read both as the continuation of the previous line and as the first part of the following line. What is that called?"
"Hello! That poetic device is called an anastrophe or sometimes a hyperbaton, but these terms are more generally used for the inversion of the usual order of words or clauses in a sentence.
"While anastrophe or hyperbaton can create the effect you're describing, there isn't one single, widely recognized term specifically for a line that functions as both the end of one thought and the beginning of another in the way you've described.
"However, the effect you're describing is closely related to the idea of enjambment (also known as "run-on lines"). Enjambment is when a line of poetry ends without punctuation or a natural pause, and the thought continues onto the next line.
"..."
I proceeded to provide an example, saying "Points if you know what song this is ;-)", and Gemini confidently misidentified the song. (The correct song is "Black Cow" by Steely Dan; Gemini told me that the lyrics I'd quoted were from Bob Dylan's "Positively 4th Street," which doesn't contain any lyrics that even resemble what I'd quoted.)
For more general searches, depending on the topic, DDG is close to useless because of link farms, AI slop, and results that aren't really what I'm looking for (some of the keywords weigh too much). But I suspect this is a common problem in all search engines, so I'm not looking for a replacement. It is frustrating, though. I can't believe the information doesn't exist; it's just unreachable.
I don't search using AI. Generally I'm not looking for information that can be distilled into an "answer"; and there's also the fact that DDG is not feeding me AI answers (I think? Maybe I'm not paying attention).
1. No prompt about decline/accepting cookies every time I want to look something up.
2. No ads.
The results are mediocre the same way using Google is.
I use an LLM to generate regular expressions.
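Since a regex is trivial to verify mechanically, a habit that might help is throwing known-good and known-bad inputs at whatever the model suggests before trusting it. A minimal sketch; the ISO-date pattern below is a hypothetical example of model output, not a recommendation:

```ts
// Sanity-check an LLM-suggested regex against cases it must accept
// and cases it must reject, before it goes anywhere near real code.
const suggested = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

const shouldMatch = ["2024-02-29", "1999-12-31"];
const shouldReject = ["2024-13-01", "2024-1-1", "not a date"];

console.log(shouldMatch.every(s => suggested.test(s)));   // want: true
console.log(shouldReject.every(s => !suggested.test(s))); // want: true
```

If either check fails, that's the feedback loop: paste the failing case back to the model and ask again.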
The other day I was also searching for something dumb: how to hammer a nail into concrete.
Google will find me instructions for a hammer-drill... no I just have a regular hammer. There's a link from wikiHow, which is okay, but I feel like it hallucinates as much as AI. Actually I just opened the link and the first instruction involves a hammer drill too. The second one is what I wanted, more wordy than ChatGPT.
Google then shows YouTube which has a 6 minute video. Then reddit which has bad advice half the time. I'm an idiot searching for how to hammer nails into a wall. I do not have the skill level to know when it's BS. Reddit makes me think I need a hammer drill and a fastener. Quora is next and it's even worse. It says concrete nails bend when hit, which even I know is false. It also convinces me that I need safety equipment to hit a nail with a hammer.
I just want a checklist to know that I'm not forgetting anything. ChatGPT gives me an accurate 5-step plan and it went perfectly.
I have not been impressed by the results. In my experience, LLMs used this way generally output confident-sounding information but have one of two problems the majority of the time:
- The information is blatantly wrong, from a source that doesn't exist.
- The information is subtly wrong, and generated a predictive chain that doesn't exist from part of a source.
I have found them about on-par with the reliability of a straightforward Google search with no constraints, but that is more of a condemnation of how poor Google's modern performance as a search engine is, than an accolade for using LLMs for search.
As for those AI chatbots - those are anything but useful for general search purposes beyond a bit of surface-level answers, which you can't fully trust because they (still) hallucinate a lot. I tell ChatGPT, "Give me a list of good X. (And don't you make anything up!!!)" - yeah, with those bangs - and it still makes shit up.
Oh, and a major reason why Google sucks now? AI enshittification. They basically jettisoned their finely tuned algorithm in favor of "just run it through the LLM sausage grinder".
Liking it a lot.
Rest? Still search engines
AI is a better search for now because SEO and paid prioritization in search hasn't infested that ecosystem yet but it's only a matter of time.
I dropped Google search years ago, but every engine is experiencing enshittification.
I'm very disappointed in Apple that changing the default search engine in Safari requires you to install a Safari extension. Super lame stuff.
Which is kind of a problem, especially for Google, because their incentive to limit AI slop in search results is reduced when AI is one of their products, and they stand to benefit from search quality declining across the board in relation to AI answers.
On the other hand every time I've used language models to find information I've gotten back generic or incorrect text + "sources" that have nothing to do with my query.
For political stuff, I avoid Wikipedia and search engines in general and ask Grok/ChatGPT, specifying the specific biases I want it to filter out and known pieces of misinformation for it to ignore.
Gemini is similar.
I sometimes use phind and find myself jumping directly to the sources.
Consider paying for kagi.
Kagi is like Google in its prime - fast, relevant, and giving a range of results.
1. *Browsing*
This can be completely avoided. Here is what you can do on Firefox, with some tweaks, to achieve no-search browsing:
- Remove search suggestions in (about:preferences#search)
- Use the [History AutoDelete](https://addons.mozilla.org/en-US/firefox/addon/history-autod...) addon to remove searches from your history. This prevents past searches in your history from polluting the results
- Go to (about:config) and set `browser.urlbar.resultMenu.keyboardAccessible` to `false`
Now when you Ctrl + L into the tab, you will get results from your history, bookmarks and even open tabs. And the results are only a few Tab presses away, no need to move your hands off the keyboard.
If you don't like the results and want to launch a search anyway, just press Enter instead and it will launch a search with the default search engine. A cool trick is to type % + space in the awesome bar to move around opened tabs. You can also specifically look into bookmarks with * and history with ^
P.S : Ctrl + L, Ctrl + T, Ctrl + W and Ctrl + Alt + T are your best friends.
P.P.S: Now you can also learn more about custom search engines: https://askubuntu.com/a/1534489 (and see the consolidated user.js sketch at the end of this comment)
2. *Quick answer* on a topic. This is the second most common use case and what Google has been trying to optimize for for a long time. Say you want to know how many people there are in Nepal, or what the actual percentage of blue-eyed people in Germany is. This is where LLMs shine, I think, but to be fair Google is just as good for this job.
3. *Finding resources* to work with. This one is a bit on the way out, because it's what people who want to understand want, and we are probably few. It is valuable because those resources don't just give an answer but also provide the rationale/context/sources for the answer. But.
On the one hand, most people just want the answer, and "most people" can include you if, even though you deem yourself a curious person, you don't have the time right now to actually make the effort to understand. On the other hand, LLMs can craft tutorials and break down subjects for you, which makes those resources much less valuable. I kind of feel like the writing is on the wall, and the future for this use case is "curating" search engines that will give you the best resources and won't be afraid to tell you "Nothing of value turned up" instead of giving you trash. Curious to hear your thoughts about that.
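Coming back to the browsing tweaks in point 1: if you want them reproducible, something like a user.js file in your Firefox profile could work. A sketch only; the resultMenu pref is the one named in the steps above, while the two "suggest" pref names are my assumption of what the about:preferences#search checkbox toggles, so verify them in about:config first:

```js
// user.js sketch -- drop into your Firefox profile directory.
// Only the resultMenu pref comes from the steps above; the two
// "suggest" prefs are assumed equivalents of the about:preferences
// search-suggestions checkbox -- verify in about:config.
user_pref("browser.search.suggest.enabled", false);
user_pref("browser.urlbar.suggest.searches", false);
user_pref("browser.urlbar.resultMenu.keyboardAccessible", false);
```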
Sadly, search is massively enshittified by AI-generated SEO'd crap...
Instead of clowning me or making me feel invalidated it would present an argument that covers both sides and would probably start with “JSPs have certain advantages, and I understand why you would feel that way. Here is a list of pros and cons…”