
Plastic-Soup-4099

GPT-6 will blow GPT-5 out of the water!


NotRexGrossman

Don’t worry, Tesla owners, full self-driving is coming next year!


FjorgVanDerPlorg

Tech bros really took CEO bullshit to the next level. They didn't even have the right words for all the bullshit they came up with, so they had to invent a bunch of terms. Now we've got "disruptive innovation" when they mean "we made an app," and "pivoting" when they mean "our original idea failed because it was shit." They're not firing people, they're "rightsizing for optimal synergy." They don't have problems, they have "challenges" or "opportunities for growth." Their product isn't buggy, it's in "perpetual beta." And god forbid they admit they're just guessing - no, they're "iterating based on data-driven insights." It's one big game of bullshit bingo, but every word on the sheet is just different ways to say "despite all evidence to the contrary, you can still trust us - so please keep giving us money." At this rate, they'll need to invent a whole new language just to keep up with their own BS. "Blockchain-enhanced AI synergy" anyone? No-one moves goalposts better than these chuckleheads. So yeah I look forward to Sam Altman's ten to twenty year plan, to bring us to AGI in 2-3 years tops.


SherwoodBCool

“Rapid unscheduled disassembly” was my favorite.


ukezi

To be fair, that's rocket-scientist speak, and it's decades old.


sortofhappyish

Any day now we'll have sunglasses with HD and AI, a mop and bucket with "an advanced LLM", etc. It happened with HDTV: every fucking product. There were even "high definition" fucking potatoes at ASDA.


PercheMiPiaci

I agree with you on the BS they spew trying to make it sound positive. I remember hearing *rightsizing* at a small company town hall in the 90s, when the CEO said *we're not laying people off, we're rightsizing*. WTF.


carleeto

So says Elon!


WTFwhatthehell

I kinda feel like there's a difference between companies. Elon constantly promises over-the-top shit in the vague hope that someone working for him will figure it out in time, and starts collecting money from customers. Altman, well...

> GPT-4 will ‘leave people disappointed’, says OpenAI CEO: https://techmonitor.ai/technology/ai-and-automation/gpt-4-openai-chatgpt-sam-altman

Pretty sure he only promises things when they've got the tech demo working pretty solidly.


WhatsIsMyName

I'm no Elon enjoyer after these last few years, but I got my first ride in a Tesla with FSD the other day. It drove us 80 miles and my friend didn't have to take over except one time getting on the freeway, when the car thought it was on a small road adjacent to the on-ramp and only went 30 MPH lol. Luckily there was no traffic to impede; this happens near his house, so he had gotten used to it and anticipated it. Other than that it was a good ride tbh. Despite being a big tech nerd it still made me nervous, but I was impressed.

The one thing that stood out to me is that the car does things very confidently. If it wants to change lanes and reads no one in the lane next to it, that thing fucking changes lanes like it is 100% sure it's safe. I guess that makes sense. I don't know, I guess I expected the car to signal its intent and then go about the lane change very cautiously and slowly, in case it had made an error or something. Nah, that thing just throws its blinker on and fucking goes lol. This was in a relatively urban and suburban area though, so the streets would be well mapped.

And despite the good experience I don't think I'd be willing to drive like that every day without still being locked in on the road, so it does kind of feel to me like it defeats the purpose. Having a kid, I don't have the faith I'd need to let it drive my kid around yet. One of those level 3 LIDAR vehicles I probably would, but they are quite expensive. Not trying to defend Tesla or Elon or anything; it was just my first time and it was an interesting experience. Hoping in the future they can either make LIDAR more affordable or someone else can develop a reasonably priced competitor so there are options outside of Tesla.


nakabra

GPT-6 will blow humanity out of the history books.


SouthDoctor1046

GPT-6 will blow you.


throwaway_ghast

No, that's against "safety" protocols. Remember: sex bad, war good.


ntermation

I don't like this game, can we change it around a little?


Spiritual_Tennis_641

Dave, what are you doing Dave?!


Bananawamajama

No, that's GPT-69.


hillswalker87

you look like a good John.


well_its_a_secret

Gpt-69 will blow us while we blow it


Digomansaur

It'll be the best GPT they've *ever* made!


greatdrams23

GPT-7 will be even better, and GPT-8.


amakai

GPT 98 will be revolutionary. Just don't upgrade to GPT Vista.


Baconinja13

Why was GPT-6 afraid of GPT-7?


AHSfav

Surprisingly, GPT-9 will suck while GPT-10 will be amazing.


giminoshi

And 6 months after the GPT-5 launch...

> GPT-5 kind of sucks and is mildly embarrassing... suggests things even a llama wouldn't suggest. We're optimistic about GPT-6, which is still in early development but will launch tomorrow.


mortalhal

SA 2026: GPT6 is mid, GPT7 is “near AGI for sure guys” coming soon in a few weeks*


RedditPolluter

And if it doesn't they can just call it 5o to edge the hype some more.


Zed_or_AFK

Can water blow? That is a good question. Let me begin with answering it by telling about a little frog.


sanjosanjo

No, it's "7 minutes abs". 7 is the key number here. https://youtu.be/JB2di69FmhE


access153

Any word on 7? Can we expect generational improvement?


PercheMiPiaci

*The best ChatGPT we've ever made*


Fragrant-Hamster-325

And we think you’re going to love it!


DaedricApple

Technology will advance!


ocelot08

And we've decided to skip GPT-7 because it's already so good we just call it GPT-8! 


ForeverWandered

It probably will if they are still around by then. 4o is way better than 3.5.


mr_birkenblatt

Why are they even wasting their time working on gpt5 if they could be working on gpt12 instead?


Bananawamajama

But GPT-7 will make GPT-6 look like GPT-2.


SageLeaf1

6, the number of GPT


Kaizenno

I'm waiting for GPT-10 if we all make it that far.


codenigma

Don't forget about GPT-7 ;)


ketralnis

CEO of company says his product is pretty great and you should try it out


billythygoat

CEO says his current product sucks sometimes; the next one should fix it, but probably won't.


junkboxraider

Exactly the kind of thing a "Sam Altman" AI would make up.


stuaxo

I don't know, can we make an automation that is as much of a prick as the real thing ?


almo2001

What I hate is that GPT refuses to say "no, you can't".


marcodave

CEOs and directors love this!


greatdrams23

GPT refuses to ask questions and clarify. I asked how to fix a wine rack to a wall. It just told me how, but a real person would ask: How big is the wine rack? What weight will it carry? What type of wall is it? Will it be resting on the floor?


SquidKid47

It's an LLM. It doesn't know what you're talking about or that there would even be a need to specify.


SyrioForel

That’s incorrect. There is a tool called Perplexity that does EXACTLY what OP says: it DOES ask clarifying questions before supplying answers via LLM. It is not unreasonable to ask OpenAI to incorporate this feature into their tool. There are too many uninformed people and luddites in this subreddit who parrot around this idea that LLMs are little more than autocomplete engines. That’s incorrect. It is also true that an LLM can’t “think”; that’s the only part you guys get right. But at the same time, in your rush to diminish the hype around LLMs, you get caught up in these ignorant games where you make false claims about how LLMs can’t do this or that, even when there are products already out there that CAN do those things.
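
For what it's worth, that clarifying-questions behavior doesn't even need a separate product; it can be prompted. A minimal sketch, assuming the `openai` Python package (the model name and system prompt wording are illustrative, not Perplexity's actual implementation):

```python
# Minimal sketch: asking a chat model to request clarification before answering.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Before answering, decide whether the request is underspecified. "
    "If it is, reply ONLY with the clarifying questions you need answered. "
    "Otherwise, answer normally."
)

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The wine-rack question from the comment above should come back as questions
# ("What type of wall?", "How heavy is the rack?") rather than instructions.
print(ask("How do I fix a wine rack to a wall?"))
```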


SquidKid47

Perplexity's Copilot isn't part of an LLM.


PickledDildosSourSex

Yep. Doesn't even do that when I ask it to take me through clarifying questions if it needs to.


Fried_puri

It occasionally recommends against doing something, but will happily explain how to do it anyway. The thing is, it’s a LLM, not the morality police, so it doesn’t have any concept of “this thing is a bad idea to explain”. It just spits out warnings that accompany the instructions elsewhere on the data it was trained on.


Astigi

A proper GPT will be the one making Altman obsolete


octopod-reunion

Wouldn’t it be funny if he literally already has ChatGPT do his job: write these statements to generate press, give commands to his subordinates, etc.


ACCount82

That's what they are trying to build. OpenAI makes no secret that their mission is the creation of AGI: a machine capable of making any human labor obsolete.


narwhal_breeder

Yes, that is the explicit goal.


PalebloodPervert

That’s because Large Language Models are not general artificial intelligence. They’re a subset of AI focused on language tasks, leveraging vast amounts of data and complex algorithms to achieve their capabilities. Each “version” should always be relatively better than its predecessors.


GenazaNL

Unless you give it shitty datasets


missed_sla

That's why I only train my AI on youtube and facebook comments, the best source of information in the world.


savage_slurpie

Add Snapchat news and you will have a more complete data set


mrpoopistan

I try every day on Reddit to enshittify the dataset.


fractalife

Let's be thankful that we haven't reached AGI, and it doesn't really seem like we're that close. We're not even ready for the issues that the LLMs are creating.


mrpoopistan

LLM has revealed only one issue: humans are too easily impressed by language as a measure of intelligence. This probably explains the historic success of cult leaders and con men. Oh look! There goes Sam Altman!


chig____bungus

Unless he's found a solution to the "stochastic parrot" problem, which would be an actual immense leap in AI, LLMs are a bubble waiting for a high profile mistake to wake up business leaders to reality and slam the brakes on the hype train. There are many neural models that are actually huge in their ability to perform pattern recognition with supernatural accuracy and speed, and it is going to revolutionise fields from medicine to law enforcement. But the hype is all around LLMs and those are smoke and mirrors. The best case scenario for those is to take deterministic/heuristic inputs and "naturalise" them into human sounding language, and for interpreting human language into machine commands. That's still really cool! But the idea of LLMs as some kind of companion or trusted source of information on their own is going to pop.


kranker

> Unless he's found a solution to the "stochastic parrot" problem Sam Altman is an investor/entrepreneur. He hasn't found a solution to shit.


yumtoastytoast

I hate it when people only give credit to some dickhead investors like Elmo, Bill Gates or Sam Altman for something that a lot of people put their effort into developing.


anynonus

I asked my AI what the "stochastic parrot" problem is and she said it's not a problem at all.


NoPriorThreat

sounds like my wife


TheTerrasque

She? Bet she gives great helmet


G_Morgan

It isn't possible to fix it with this model of AI. It is fundamental to how everything works.


entropythagorean

You can generalize LLM output as basically a bell curve around the average of the information available on the internet. It does output mistakes as you move toward the tails of the distribution, but in the bulk of the area under the curve, where the average happens to be correct, it's still immensely helpful and valuable. I'm not sure why the assumption is that it needs near-perfect accuracy to have any utility; it's not even touted as such.


PartyClock

I've been using the term "Word calculator"


Studds_

Yes! Great term. That’s exactly how using them feels


AppleDane

"Sentence processor", like an advanced word processor.


Game-of-pwns

They use a shit ton of power and compute resources to not have anywhere near perfect accuracy. Once businesses realise the cost-benefit isn't there and the VC money dries up, this bubble will pop.


makemisteaks

There’s also an issue of accountability for companies. If an employee makes a mistake I can fire him and keep my reputation intact. If an AI is embedded into my service in a way that I can’t extricate myself from its mistakes, then the whole brand trust comes tumbling down.


chig____bungus

Because these things are going to have a wide reach and influence. Children are going to grow up learning from them. Would you accept if Sesame Street started teaching letters that didn't exist? Would you accept an adult telling your kid it's safe to eat poison? Obviously not. So why would you accept an LLM stating falsehoods, often dangerous ones, as fact?


ShiraCheshire

Okay, but that's not a problem we can solve currently, and not what the technology is made for. You might as well ask why cars aren't safer for children to drive; some children will try, won't they? Why isn't this food from the grocery store cut into little pieces when toddlers will inevitably try to eat it? Our focus shouldn't be on "We need to make this LLM child friendly." It should be on "We need to make sure that people are aware these are language models and not truth machines, and try to ensure no one takes them seriously as a learning tool ever."


Jsn7821

Also children learn from adults who happen to be wrong about stuff all the time! Probably far more frequently and confidently than the top LLMs will ever be


therockhound

Quite a shock when I looked into the evidence and realized what parents taught me was wrong: the earth is not 6000 years old.


Wookimonster

Because in a lot of ways it's sold as not having these issues. Like when that LLM gave an airline passenger a refund it wasn't supposed to. LLMs are being sold as working solutions to problems, like Tesla "Full Self Driving", which apparently attempts to self-destruct at random by driving into trains or whatever. The sales pitch is that more and more can be replaced by AI and work reliably, but it doesn't work reliably, it just works most of the time. The problem is that when it's deployed en masse, these cases will crop up more and more. If AI salesmen actually addressed this by saying "this is a useful backup feature and tool, but you gotta fact-check what it puts out", it would be a different story.


Nodan_Turtle

It doesn't need perfect accuracy to have **any** utility. But the mistakes it makes today can be deadly. It'll never reach AGI and the insane economic boons that can reap. Plus, the imperfect accuracy also limits where it'd be implemented with the capabilities it does have. Imagine trying to sell the military a new gun that most of the time shoots at the enemy, but sometimes will fire back at friendlies. Then explaining to them it's really accurate most of the time.


Time_Mongoose_

How do you know where a given piece of information falls on that bell curve?


mrpoopistan

The problem with LLMs is that the median, modal and mean answers are all different for the typical curves we see in nature. Depending on the available dataset and the quirks of the question at hand, this can lead to radically incorrect responses. The problem is that most of the time, an LLM will do just fine because generic questions deserve generic answers. Long tails, edge cases and breaking developments are where the butchery happens, though.


Ylsid

Yup, the misuse drives the hype.


NeillMcAttack

They are incredibly valuable learning tools. The ability of the system to comprehend the most basic of language inputs and give correlated, reasonable outputs brings a level of access to information never before seen. It may not, currently, help individuals at the top of a field advance much. But the layman being able to convey questions to a system not prone to the limitations of a human educator, you must see as a monumental shift. I can ask questions about coding, languages, history etc. and get back a better understanding as a layman, immediately, at any time of day, with minimal effort, with little background in the subject, and receive an answer tailored to my level of understanding thanks to the system correlating my use of language to my general current understanding. LLMs are here to stay!


Temp_84847399

I've read through several papers where they tested how a specifically trained LLM can assist workers with various tasks. They all showed a significant narrowing of the differences in output between novices and experts. That's the kind of thing that may not seriously affect the overall unemployment rate, but it can definitely lower salaries by opening up knowledge type jobs to a much larger applicant pool.


NeillMcAttack

I’ve learned more about how to effectively use Excel by asking ChatGPT over a few weeks than I learned in two paid courses in my career. The fact it can take my terrible terminology and make semantic relationships is game-changing. As a simple example, I could refer to “each little box” and the system understands context and knows I am referring to a “cell”; it makes learning very intuitive.


Temp_84847399

Yeah, it's a huge leap over other training materials. I've used it to teach me about Docker. And that's still using a general-purpose LLM; we haven't really gotten to the point of very specialized ones that trade broad function for accuracy and consistency. Anyone saying "AGI in 6 months!" is likely way off base, but equally so are the ones sticking their heads in the sand saying that AI will be forgotten in a year.


axck

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


mrpoopistan

That's unfair. The bubble might deflate when the next thing comes along. After all, half the hype of OpenAI is just Nvidia finding a bridge to the thing after crypto.


SherwoodBCool

“I know up to now everything has been bullshit. But this *next* version is gonna be great, I swear!”


mymemesnow

> ”I know up to now everything has been bullshit”

They revolutionized the field of AI and changed the world forever with ChatGPT. You can say what you want about OpenAI, but you have to admit that they’ve accomplished much more than most companies ever will.


sage6paths

Anyone else remember IBM's Watson? Just saying.


allknowerofknowing

Difference is hundreds of millions use online LLMs. You're kidding yourself if you think this technology isn't very useful already


ecstaticex

Think about the carbon footprint of billions of people using LLM’s for computing


allknowerofknowing

I mean that's something to consider, but the point of my comment was that it is useful and available to lots of people compared to IBM watson


ecstaticex

Is it useful? Absolutely. Is it being used effectively? Not even close. There is a reason that Watson was never given to the public. There is also a reason why regulation on publicly available LLMs needs to be enacted, so that we aren't wasting resources on stupidly frivolous matters.


allknowerofknowing

It is being used effectively by people like software engineers. It is a great productivity tool


gold_rush_doom

Nope. I'm fairly certain Google had something like this in their labs but they didn't release it because of how bad the results are. They didn't think people would be OK with non-factual replies. They didn't think there would be a market for this except grifters. Google does AI much better than OpenAI, but now they will put their resources into better speech synthesis or spam-caller detection, not into the grifters' paradise, the Bitcoin 3.0 that is LLMs.


wrongtake

Nah, Google never had anything the size of GPT-3.5 or 4. Their PaLM model was useless. The only good thing about Gemini 1.5 is that it's multimodal. At least OpenAI forced Google to release something; Google will just ship another blog post and a year later will kill it. Their AI search summarization is also an embarrassment; Perplexity had better search from day 1, and it was a tiny startup at the time.


Right-Wrongdoer-8595

It's crazy how both sides of this debate have developed such a pointed narrative.


SherwoodBCool

“Changed the world forever?” I guess they did make it easier for misinformation to spread.


WhoDat-2-8-3

"Best iphone yet"


iim7_V6_IM7_vim7

Calling it “bullshit” is a bit much I think


alstegma

[ChatGPT is bullshit](https://link.springer.com/article/10.1007/s10676-024-09775-5)


SpasmaCuckold

I've just read (scanned) that paper... It is an outstanding analysis! Pragmatic and pretty close to the truth. Bullshit, indeed!


jax362

Alternate headline: “Snake oil salesman touts next generation of snake oil.”


stoogs

Guy's a grifter for sure.


AdvertisingNatural36

Sam Altman is a piece of shit human being who comes off as a Saturday morning cartoon villain - except in this case this POS could bring the world to its knees. Hope it was worth it Sam


lambertb

Lying liar says what?


Drakonx1

Thanks Elon Jr.


1smoothcriminal

i don't trust this guy one bit .. i can't be the only one.


throwaway92715

A silicon valley data mogul? What's not to trust?


bz386

Of course he does. It’s straight out of the Elon Musk playbook. Empty bullshit promises and vaporware.


getSome010

Man, I don’t get it. I’m using GPT 4.0 for work and it can’t even perform simple tasks. Why do people still think AI is so incredible?


Bacon_00

Because big tech put big bucks into it and they are desperate to make it "a thing."


Trollzore

It’s one of the most incredibly powerful tools to come out recently, compared to something like Apple’s Siri. Y’all expect teleportation yesterday.


Aourijens

I can say the complete opposite. I used 4o to create a Discord bot that connects to OpenAI so I can use GPT at work, since it was blocked… guess what helped me do that. I have zero experience with Discord bots and it took me maybe 2-3 hours to set it up and have it working, where it opens a private thread to chat. Maybe you need to prompt in a different way.
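
A bot like that really is only a few dozen lines. A minimal sketch, assuming discord.py 2.x and the `openai` Python package (the bot token and model name are placeholders, and the private-thread handling from the comment above is left out):

```python
# Minimal sketch: a Discord bot that forwards messages to the OpenAI chat API.
# Assumes discord.py 2.x and the `openai` package; token/model are placeholders.
import discord
from openai import OpenAI

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text

bot = discord.Client(intents=intents)
oai = OpenAI()  # reads OPENAI_API_KEY from the environment

@bot.event
async def on_message(message: discord.Message):
    if message.author == bot.user:
        return  # ignore the bot's own messages
    # Note: the synchronous client blocks the event loop; fine for a toy bot,
    # a real one would use the async OpenAI client instead.
    response = oai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message.content}],
    )
    reply = response.choices[0].message.content or ""
    await message.channel.send(reply[:2000])  # Discord caps messages at 2000 chars

bot.run("YOUR_DISCORD_BOT_TOKEN")
```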


gold_rush_doom

It works for things which people are doing right now on the internet. But I asked it on how to write something for Windows 10 Mobile, a platform which doesn't exist anymore, and it gave me suggestions for Windows 10. It's not magic, it's as dumb as a search engine.


ShadowbanRevival

So you asked it to write something for a platform that doesn't exist anymore, and that shows you it's dumb? The other guy just said that he had no experience making bots and he made one in two to three hours using just prompting. Seems like a big leap to say it's dumb because it couldn't help you develop for an obsolete piece of tech.


Zealousideal_Gap7483

So you learned how to use an api key and create a discord bot which has fully built out client packages? That’s great but no offense this was 100% achievable by anyone in 2-3 hours prior to ChatGPT, even if you had no coding experience. To non technical people this SOUNDS like a huge feat, but it’s incredibly basic and easy to achieve. This is why I find so many of the AI doomers funny, they don’t know what they don’t know


NoPriorThreat

> it’s incredibly basic and easy to achieve

If you know what to do, it is easy. If you don't, then it is not that easy. Especially with an API/platform that you have no prior experience with.


t-e-e-k-e-y

>To non technical people this SOUNDS like a huge feat, but it’s incredibly basic and easy to achieve. But isn't that the point? It makes it simple and achievable for non-technical people.


Lyriian

You honestly could have done exactly that with a YouTube video in the same or less time. ChatGPT knows how to do that because there are literally thousands of examples of that exact thing on the internet.


Fit_Flower_8982

Because it is, depending on your use case. People seem to expect AGI, when we're still a long way from that.


mrpoopistan

Dotcom is the reason. Seriously, the fear in investing isn't pissing away money. It's missing the next thing. Even if you have to place 99 bad bets, you really, really, really want to make sure you have a bet in play when the next Amazon appears.


RoboNeko_V1-0

I'm still waiting for the release of the voice chat we saw in the demo.


Mindfucker223

Anthropic with Claude shows more promise than OpenAI with GPT.


SherwoodBCool

I just heard ai described as “digital mansplaining,” which is perfect. It takes superficial inputs and turns them into being confidently wrong on subjects it knows bugger-all about.


skillywilly56

I could live with never hearing or reading this man’s name ever again till his obituary.


ShowBoobsPls

Looking forward to more progress


ShadowBannedAugustus

I am shocked. I expected the CEO to say his next product will be crap. In other breaking news: Apple announces the iPhone 17 will be better than the iPhone 16.


Dull_Half_6107

CEO of a company: "Our next product is going to be shit actually" This has never happened


sir_duckingtale

You have exceptionally clever six-year-olds. Most of them are dumb as bread. Heck, even I make way more mistakes than ChatGPT.


mdkubit

Will we ever see true artificial intelligence in our life time? Or, is it more likely we'll generate something that we can't distinguish from intelligence - in terms of interaction - but behind the scenes is still, at its heart, just a prioritized search engine based on data sets? Because now I'm wondering if we'll be able to create something indistinguishable in terms of interaction from us, but that has zero capacity for actual thought or reason or conceptualization of any kind; the ultimate chat bot that is indiscernible from a human. ...and at that point... will it even matter that it has no capacity for these things...? (Sorry, thinking like an author, imagining 'what ifs' of the future, etc).


ACCount82

There is a name for "something that is completely indistinguishable from intelligence". It's "intelligence".


Jamizon1

Sam Altman is a complete dipshit.


NortheastBound2024

GPT is garbage. Can’t wait for OpenAI to fail.


iduddits2

Yeah, besides the head trip that the AI images and that stuff are, I’ve not really been impressed with the “intelligence” side of it. It just compiles already-available info into a pretty standard format. And often it’s so factually wrong.


AcademicMaybe8775

The proliferation of AI 'news' is getting ridiculous too. It's making finding actual information harder: you look for something specific and get pages that are clearly AI-written nonsense.


sickofthisshit

I see a bunch of YouTube videos that are some weird AI script being machine-read over things like military footage. I don't get why there is so much traffic in "military machine go brrrt", but combined with AI making bizarro pictures of impossibly large planes and trucks, parts of the internet are like a Popular Mechanics magazine on acid.


Zephyr4813

The sentiment here is weirdly wanting it to fail? I truly don't understand. It seems odd to want a free AI tool to fail rather than become very helpful for all tasks.


Bontacha

i think most people here just dont like sam altman


crotte-molle3

because this sub is dumb as shit


Zephyr4813

It really is. Look at the replies by the user who made that comment and his user profile altogether. I'm surprised and disappointed that users in a tech subreddit would be so idiotic


[deleted]

[deleted]


futebollounge

I think you’re hallucinating that statement


Agreeable-Bee-1618

A few months ago techbros would get super angry at anyone saying AI is a bubble and most AI tools are pretty useless; now everyone agrees they're not nearly as good as everyone thought.


swiftgruve

AI is strange in that as useful as it can be, most people I know hope it fails to live up to the hype and we humans can stay relevant.


Kyouhen

Humans will always be relevant because LLMs aren't very useful. They lack the ability to understand what they're saying or even the context of the information they're regurgitating. They run little mathematical calculations to figure out what the most likely response we'd expect is and give it to us. It's all pattern recognition with zero ability to think about the data. Best recent example I've seen is that it lacks the ability to recognize what letters are used in a word. They can't break down what they're saying into smaller parts, they just repeat what other people have already said. Honestly we should stop calling them AI and only call them LLMs. AI is extremely misleading and just an attempt to build hype using a term we're excited to see. These are nothing more than glorified chat bots.
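
For anyone who wants the intuition in code: a toy sketch (NumPy, made-up numbers, nowhere near a real model's scale or training process) of the "pick the most likely next word" step described above:

```python
# Toy illustration of next-token prediction: a model assigns scores (logits)
# to candidate tokens, converts them to probabilities, and picks/samples one.
# Vocabulary and numbers here are made up purely for illustration.
import numpy as np

vocab = ["blue", "cloudy", "falling", "made of cheese"]
logits = np.array([4.0, 2.5, 1.0, -3.0])  # hypothetical scores for "The sky is ..."

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.3f}")

# Greedy decoding always takes the top token; sampling sometimes picks a
# lower-probability one, which is one source of odd or wrong answers.
print("next token:", vocab[int(np.argmax(probs))])
```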


allknowerofknowing

LLMs are 100% AI. For some reason people have this idea that AI has to mean conscious humanoid robots or something for it to count as AI.


jan04pl

I think the misconception comes from people meaning AGI when they write AI in most cases. LLMs most certainly are AI. A Roomba cleaning a room and not running into obstacles is also AI. They are, however, not within a million miles of anything resembling AGI. That's what OP probably meant.


Walgreens_Security

I’m sorry Mother Nature. Sorry to planet Earth.


Death-by-Fugu

So sick of hearing about this sycophantic liar


furezasan

I'm starting to think AIs going batshit insane is a feature of LLMs, and we'll need new tech, not more of the same, to improve on that.


Tavrin

Speak less and deliver more 🤷


Fact-Adept

What’s the best local LLM that can perform as well as chatGPT and other commercial ones?


Aquirox

There is a good difference between GPT-4o and GPT-4. What GPT-4o lacks is a sandbox, but there is a risk that it will come out. ^^ If we give GPT the right to book a plane, a hotel, order water, etc., that's the next level, and it's scary. GPT is castrated and restrained; people forget it.


djdefekt

I'm holding out for GPT-X. Only making mistakes a 9 year old would make!


yaqubkofi

Sounds like my old weed man. “this shit is way better than the last batch, trust me bro”


Whatever4M

Can't wait. Chatgpt 4 was a huge leap from 3. Looking forward.


Tentomushi-Kai

Just wait for GPT-1000!


Saneless

They're taking the EA/2K playbook.

Version Y comes out: "Version X is such shit. You won't believe how much better Y is."

A year later, version Z comes out: "Y had so many shitty flaws. You won't believe how much better Z is at these basic things we forgot to mention were broken last time."


sdwvit

Still not general ai


Danjour

This guy is giving me Zuckerberg vibes


cmoz226

Paradigm change coming soon!


figgityfuck

CEO hypes up new product


TanguayX

It’s a race to see which appears more human: ChatGPT or Sam Altman. (GPT-3.5 won.)


ACCount82

Years after every megacorp under the sun started pouring billions into AI research and development, OpenAI still has the best tech out there. GPT-4o still outperforms anything anyone else has come up with by a considerable degree. I have no doubt that GPT-5 is going to raise the bar of AI performance again, but I'm curious to see by how much. If they can deliver another generational leap, like that between GPT-3.5 and GPT-4? That would be something.


throwaway69662

I highly doubt that 5 will be significantly better than 4.


[deleted]

Sam Altman is a grifter. Who gives a shit what he says?


bittlelum

Head of company says new product will be good. News at 11!


NorthernCobraChicken

It's almost as if iterative improvements are generally better than their predecessor.


-The_Blazer-

> The CEO compares the development of its LLMs to the iPhone For the trillionth time: you are not going to hype or even tech your way into being 'the next iPhone'. It's unlikely we'll even know in advance what is 'the next iPhone', assuming 'the next iPhone' is even a repeatable event. I swear the iPhone just broke the R&D market. Instead of just making good useful products, now everyone is trying to be Steve Jobs in 2007 like there's some kind of magic formula we'll get to if you just try hard enough.


gagfam

The iPhone was more the next iPod than the first event of its kind. Sure, no one thinks about it anymore, but killing your greatest success and telling everyone to buy the next big thing was insane at the time. Eventually Apple will retire the iPhone and the world will change again in the blink of an eye.


McCool303

I mean if GPT-5 could blow me it would be a significant improvement over what it currently does.


used_bryn

So the free version will be GPT-4o?


blurnbabyblurn

“When we make a better version of this product, it will be a better version of this product.”


Total_Adept

“How many r’s are in the word strawberry”


the_taco_man_2

There's a common concept, not just in tech but in a whole lot of other industries, known as the 80-20 rule. It's closely related to the "long tail". Basically, it's fairly simple and straightforward to get to 80% efficiency/accuracy, because the vast majority of use cases fall into only a handful of scenarios. But getting that final 20% is almost impossible, because all of those little edge cases splinter into thousands and thousands of potential outcomes. It'll always be the same thing for GPT: it'll handle the majority of queries just fine, but the edge cases will continue to break it, no matter how much they improve it.


thirteennineteen

It’s not human, what it does is not “intelligent”. Stop comparing it to a human.


predator8137

I don't know what's so wrong about his claim. Isn't it reasonable that an emerging technology with exponentially more resources thrown at it would also improve significantly? GPT-2, 3, and 4 were all vast improvements over their predecessors. Why wouldn't GPT-5 be?


supaxi

bigger number better, they just need to add some x’s too


Zealousideal-Poem601

that motherfucker has never said anything useful