> In a study of over 16,000 queries, measured against institutional benchmarks from McKinsey, Harvard, MIT, BCG, and others, we determined Perplexity Computer saved our internal teams $1.6M in labor costs and performed 3.25 years of work in only four weeks. And now we’re extending those same capabilities to other teams.
This is a wild statement that does not seem to be supported by any actual data.
What does it mean? Does clicking on a link count as labor?
> What does it mean? Does clicking on a link count as labor?
I think we might be seeing what happens when people are being paid too much to spend all day emailing each other and juggling Excel sheets, Gantt charts, and org charts. Yeah, for some definition of "work" I guarantee that an LLM could perform 3.25 years' worth in four weeks.
> people are being paid too much to spend all day emailing each other
Hmm, this does not sound exactly right. Also, does anybody seriously think that communication is not work, or is not important? A number of really impactful things started from people emailing each other. (Hell, Linux kernel development still largely runs on people emailing patches to each other.)
Some emailing is important; most of what people do, however, is not. Same with meetings, calls, etc. Most of it is filling the day so they don't get fired.
The problem with human labor is that, as an organization scales, the amount of work any individual in the system can do shrinks due to the coordination problem.
Coordination consumes a larger and larger share of employee time, to the point that, in the absolute largest organizations, the vast majority of employee time goes to internal coordination rather than actually improving or selling the customer offering.
So if you go from 100 employees to 1,000 employees, they can MAYBE do 4X the work. Not 10X like you'd think. And this effect gets even worse as you scale further.
So if an AI can do 10X more labor in a human day, and can coordinate instantaneously via a central context ledger (say a git repo), it doesn't just create 10X gains in productivity for large orgs. It creates a multiple of that 10X due to also removing the human coordination overhead.
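The scaling claim above can be sketched as a toy model (all constants are made up for illustration, not taken from any study): with n people there are up to n - 1 pairwise channels per person, and each open channel eats a fixed slice of that person's time.

```python
def effective_output(n, channel_cost=0.000625):
    """Productive output of an n-person org, in person-units.

    Toy model: each person can talk to (n - 1) others, and each open
    channel consumes `channel_cost` of their working time. The constant
    is tuned only to match the "1,000 people do maybe 4X the work of
    100" guess above; it has no empirical basis.
    """
    productive_fraction = max(0.0, 1.0 - channel_cost * (n - 1))
    return n * productive_fraction

print(effective_output(100))   # ~93.8 person-units
print(effective_output(1000))  # ~375.6 person-units -> about 4X, not 10X
```

The point of the sketch is only the shape: overhead per person grows linearly in headcount, so total output grows sublinearly, and anything that removes the human-to-human channels removes that drag.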
Don't you think AI itself adds coordination overhead? A 1,000-strong team with AI agents will feel like a 5,000-person company where more than 30% are not even at exception level, i.e. they need to be pulled along.
This is why having fewer people and more agents actually makes sense, but the coordination problem remains either way.
And you cannot escape it because it is simply mathematical.
The coordination problem absolutely can be escaped with technology, hence why productivity gains exist and why the economy grows and isn't a fixed pie over time.
Here's an easy non-AI example:
In the past, a 'computer' was literally a person [1]. If you needed to synthesize large amounts of data, you had to split the task among a team of people writing things down, then a team checking their work after the fact, then a team combining all the work, then a team double-checking the combined result.
Tasks that once took a room full of people coordinating with pencils are done today by one machine (what we now call a computer) that no longer needs to split the task and coordinate. The same will happen with 'agents' that can take on vastly more work per unit of time.
I'm perpetually cautious about wild tech claims myself, but if you watch the launch video, there are examples of how they could claim labor/cost savings.
For example, one task gives Perplexity Computer a document with data, charts, and metrics and asks it to create a 10-page slide deck for a presentation. Before AI, that took human capital and labor costs.
I can't say whether the $1.6M in labor costs is legit or not, but these tools are not just clicking links in 2026.
You don't think there's cost/labor savings in making agents and workflows easier to use? I don't think your average back office employee is going to be setting up OpenClaw.
When I'm running a lot of model training workflows concurrently, I can spend a small but noticeable part of my day just clicking through links to check progress and error logs. If an AI could understand the relatively complex UI well enough to find the right links to click, it could produce a status report that takes me 15 seconds to read, and that alone would save $2,000 of labor annually.
I think their numbers of $1.6M and 3.25 years is still probably a massive overestimate, but the order of magnitude seems plausible.
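For what it's worth, the $2,000 figure above survives a quick back-of-envelope check (my numbers, all assumed, not the commenter's):

```python
# Rough annual cost of clicking through dashboards by hand.
minutes_per_day = 10       # assumed time spent checking progress/error logs
work_days_per_year = 250
hourly_rate = 50.0         # assumed loaded labor cost, $/hour

annual_cost = minutes_per_day / 60 * work_days_per_year * hourly_rate
print(f"${annual_cost:,.0f}")  # $2,083 -- right around the claimed $2,000
```

Ten minutes a day at a fairly ordinary loaded rate gets you there, so the per-person order of magnitude is plausible even if the company-wide total isn't.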
None of these companies are serious businesses. They are either researchers out of their depth in transforming their work into economically stable products or they are inept CEO scammers pretending that how you sell something is more important than what you sell.
If you stop paying this subscription, this living computer with the googly puppy eyes gets it. You wouldn't want anything bad to happen to your best friend, would you? soft whimpering sounds
> I just don't understand what's even intended by this.
I might be misinterpreting, but according to the landing page, this is the intention:
> Personal Computer gives Perplexity Computer and the Comet Assistant always-on, local access to your machine's files, apps, and sessions through a continuously running compact desktop.
> It's a persistent digital proxy of you. Controllable from any device, anywhere.
That being said, the grandeur and bombastic language also seems fitting for something less sinister, like an even worse version of MS Recall maybe? Combined with, let's say... agents!
That's it! Your Personal Computer is your agent: not only may it act on your behalf, it also communicates your preferences and intentions.
Does openclaw have a killer use yet? I've not opted to use it yet because - while the hype and capability are very impressive - it seems like a lot of risk/credits for not an insane gain.
Who in their right mind is going to blindly trust an AI like that? There wasn't any review of the numbers, or even a hint of a "sniff test" on the output of the AI?
Would a real person risk their reputation like that?
--
With regard to the attempted redefinition of a commonly used term, I'm reminded of Gretchen, from Mean Girls, trying to redefine "fetch" [1]
This is that 2024 trope: "The AI is turning my 5 bullet points into a proper email to send." "And my AI is summarising all those long boring emails back into bullet points!"
The slide deck won't be viewed by a human. It'll be read by the human's pet LLM and then summarised into 3 bullet points.
I love (read: hate) the trend of serif fonts and marketing material that pulls on nostalgic vibes. Certainly, AI has been revolutionary in its own regard, for better or worse. But the more they lean into '80s/'90s-style advertising, the more the allure of it dies.
Could it just be a new trend? There are just two options in this case (serifs or no), so I’d expect it to flip back and forth sometimes.
The broader trend is pulling back a bit on “minimalism,” right? I think we hit peak (or valley?) minimalism already so I guess there’s only one way to go.
I do agree with you, there is a reversal of the minimalism trend (which I am incredibly happy to see).
However, in my opinion this specific typeface and aesthetic has been taken up by AI companies to hark back to the likes of the 1984 Macintosh ads and such... in an attempt to convey that "$(AI_PRODUCT) is just as revolutionary as the first desktop PCs".
Whatever happened to Perplexity? They were all the rage a year or two ago, and now I hear... nothing. Is the product still being used? Making money? Or just overtaken by the base LLMs it was relying on?
They have been receiving a lot of hate on Reddit for a few weeks, since they started mass canceling Pro accounts.
What initially seemed to be an attempt at preventing illegitimate accounts (i.e. those using grey-market coupons, for instance) escalated into waves of random account suspensions just for not having a credit card on file, including legit accounts that came bundled with ISPs, bank accounts, etc.
It's still there. For Joe Schmoe general-purpose, ask-it-a-question LLM use, Perplexity is solidly in the following lineup, as I understand it:
- Perplexity: This one has been promoted on (insert general audience media skewing toward the older set) enough to be a household name still.
- ChatGPT: General people in some demographics (see immediately above) are averse to this, on account of negative publicity its parent company has received. (Still very strong popularity and positive sentiment in some demographics, though)
- Claude: Some semi-literates have glommed onto this one, possibly as a result of its more recent success among the developer set.
- Grok: People can be either for or against, based on how they feel about its owning company and its ownership; no more need be said
- Gemini: Again, if you are in the universe of its owning company (or decidedly not), the draw (or repulsion) can be strong here.
For general LLM use, the above are all about the same. To be clear, this is just me shooting from the hip for how each offering might be viewed. IMO, it's not a bad idea to submit the same input to each and see how they compare, if one is so inclined.
Funny how you didn't even mention MS Copilot, which many of my friends who work for big corporations seem to have been forced to use at work and, as a consequence, also for personal use.
The generic elevator music used for the demo video is highly representative of this whole concept: generic and derivative.
Seriously though, Perplexity, like most of the AI wrapper companies, seems unable to innovate much beyond the query-response chat paradigm. I don't understand why VCs continue to fund these ai-slop companies. I see a new company's advertisements on the NY subway every week, and they're all the same: Anthropic/Google/OpenAI resellers who are selling some UI wrapper (or at best a bespoke model worse than the flagships) on top of pretty basic prompt engineering or tools.
This is what happens when we invert the product-paradigm: we're not solving problems with technology, we're taking technology and applying it to problems.
I use AI every day, so I'm hardly a luddite, but this bubble is so ridiculous at this point. This perplexity product, more than any other so far, feels so representative of peak craze.
I'd be willing to bet that every wannabe CEO out there is drooling after seeing that demo. That's clearly the target market: the wantrepreneurs who would surely have their brilliant, successful business if only they didn't have to hire a bunch of lazy employees to half-ass it! "If only I could just speak my vague ideas to my computer, and it could do all the hard work of building and running this business, I could just chill out, be an entrepreneur on Insta, and collect the revenue checks."
I need someone who can translate marketing to help me out here. All the other comments seem equally baffled as to what this is. This is clashing with my idea of a personal computer with an AI operating system. Did anyone figure out what chip it uses, if it's local only, does it have a screen or do I plug in peripherals?
They may not come after all the niche companies, but they definitely come after the most successful markets, especially those with low-effort moats.
Same goes for relying on the Apple/Google app stores (e.g., Apple literally got slapped in court for copying successful apps and then pushing its own offerings to the top of its store... talk about wildly abusive behavior).
I may still choose to use AWS/GCP/Azure while trying to find product-market fit as an immature startup, but I'd look real, REAL hard at ditching them as soon as possible afterwards.
Unless you have particularly bursty workloads, they aren't even a good cost saving measure anymore.
Unfortunately the zombo.com domain expired recently, and whoever snatched it replaced the original audio with AI-generated shit. Nothing is sacred anymore.
So basically a thin client where all the data lives in the "AI cloud" and you are at the mercy of the mainframe provider. Whatever happened to "the network is the computer," Sun Microsystems?
...because this thing will go rogue faster than you can blink.
I swear, it's like nobody at the company even reads the slop they're generating or thinks about it for any amount of time. In what world is advertising a kill switch as one of its essential features a positive? It's basically admitting from the start that this is unreliable.
They replaced their production staff with clawbot, it's all part of the plan.
There's a sense of "early bitcoin" around clawbot and other agent frameworks. If you wait another 2 years for it to mature, you'll have missed out, just as if you'd waited ten years after bitcoin began.
They're insecure and janky, sure, but on the other hand you've got millions of dollars of compute and tens of thousands of very motivated developers working on making them secure, reliable, and competent. There's something magical about AI that actually gets real work done while you're doing other things, and that's what Perplexity is probably hoping to sell.
Just need a reliable local model, though - AirLLM and other hacks let you run bigger models more slowly, so you can build a completely API-free setup running pretty capable agents even without big GPUs.
Could be a Moravec's paradox thing - all these people are thinking that the solution looks enticingly within reach, but it might be an absolutely horribly complicated quagmire with no easy solution short of AGI. I'd bet on clawbots and agents being very secure and great to work with in the very near term, though.
Yeah, but since they link to a page where they describe openclaw as "malware reading your text messages," I'm assuming they like to think of this as something more evolved.
> So Perplexity's openclaw? Hopefully more secure?
Given the inherent unpredictability of LLMs, I'm not convinced that an openclaw-like system but with more security features bolted on top is really a positive in the sense that the false sense of absolute security probably outweighs whatever actual security has been added.
At least with openclaw it's easier to understand that it is definitely insecure.
Stop posting AI slop, especially slop pull requests like the one you made to OpenClaw. Learn the first thing about a project you want to monetize and make fake contributions to. For example, OpenClaw is overwhelmed with slop PRs and the author has talked about this a lot.
[1] https://en.wikipedia.org/wiki/Computer_(occupation)
Send me the data and I'll ask my own AI to do it in my favorite silly voice.
I want to know pre-"personal computer by perplexity"
I would be willing to try this new product of theirs, but definitely on a secondary computer (i.e. not main system).
Do I have to sign up to install their version of an OS/openclaw?
What does this mean? The computer isn't alive. It's physically located on my person? Phones and watches have already cracked this.
If I say "Bob lives with me", that just means Bob generally shares a residence with me. Desktop PCs already do that.
I just don't understand what's even intended by this.
But they want you to think of it as alive. They're anthropomorphizing it.
Futuristic, right?
>Personal Computer runs on a dedicated Mac mini that can run 24/7, connected to your local apps and Perplexity’s secure servers.
It's just not going to happen.
[1] https://www.imdb.com/title/tt0377092/quotes/
https://www.fastcompany.com/91497841/meta-superintelligence-...
… particularly with acts that have legal implications like … well, almost everything, but particularly communication with investors or board members.
If people can get slides or summaries by pushing a button, they don't need others to push the button for them.
Also this "system" just seems vulnerable af.
Build everything, do anything, give AI all your data and thoughts and system access and it will give you the world!
I'm not surprised our own "roaring" 20s is seeing this shift.
https://www.reuters.com/investigates/special-report/amazon-i...
I don't think I'm cut out for the modern world
I thought of zombo.com the other day and booted it up. There is maybe no other website that continues to bring me as much joy as zombocom
However you can still do anything at https://html5zombo.com/
No, it doesn't, because it's not alive.
https://www.perplexity.ai/hub/blog/everything-is-computer
They designed a program (copied OpenClaw) and called it a computer.
>Depends on our SaaS
Pick one.
https://www.dailymotion.com/video/x9fza3s
Basing this concept on what we have today with LLMs is a recipe for chaos, unreliability, and slop communication, at best.