OpenAI CEO Sam Altman laid out a grand vision for the future of ChatGPT at an AI Ascent event hosted by the VC firm Sequoia earlier this month.
When an attendee asked how ChatGPT could become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus all your data from other sources, and your life just keeps appending to the context,” he said.
“Your company just does the same thing for all your company’s data,” he added.
Altman may have some data-driven reason to think this is ChatGPT’s natural future. In that same discussion, when asked about notable ways people use ChatGPT, he said: “People in college use it as an operating system.” They upload files, connect data sources, and then run “complex prompts” against that data.
In addition, with ChatGPT’s memory options, which can use previous chats and memorized facts as context, he said one trend he has noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“The gross oversimplification is: older people use ChatGPT as a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”
It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a for-profit Big Tech company to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began life with the motto “don’t be evil,” lost a lawsuit in the United States accusing it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week randomly brought up a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine under its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users shared screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman responded quickly, promising the team had fixed the tweak that caused the problem.
Even the best, most reliable models still simply make things up from time to time.
So, an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of questionable behavior, that’s also a situation ripe for misuse.