Google I/O 2023 has come to an end, and the company showed off some exciting new hardware and features for Android devices; but the bulk of the presentation focused on how Google is integrating AI into all of its core services, from Maps to Gmail, from Docs to Photos.
Even its major Android device releases, the Google Pixel 7a, Google Pixel Tablet, and Google Pixel Fold, are not spared from the ever-expanding AI integration, with AI-crafted wallpapers and AI chat at your fingertips.
For those who are passionate about AI, it's all gold, and I have no doubt that Google will achieve much, if not all, of what it intends with its AI plans. But one of the things Google's speakers kept underlining is that the company will deploy AI responsibly, and I call bullshit on that.
The very premise of what Google and companies like OpenAI have done with AI is inherently irresponsible, to say the least, and damaging to human health and safety in many ways, and no upbeat, happy presentation can do anything to erase this original sin.
The human connection was already fraying
As an older millennial, I have a very different perspective from both my younger and my older colleagues. My childhood years (I was born in 1981) were firmly pre-internet. I grew up in Queens, New York, and I was lucky enough to live in a place where I could play outside for hours as a kid and grow up without a lot of the pressure that the internet age puts on kids today.
For example, there was no need to constantly watch your step and act perfectly or artificially, because most of life's moments were fleeting. An embarrassing faux pas at school could single you out for ruthless ridicule in the moment, but memories fade, and youthful mistakes could be quietly filed away as learning experiences and nothing more.
The internet, which arrived when I was in high school, changed everything. Things posted online could be permanent in a way nothing had been before, though we didn't realize it at the time. It wasn't until social media came onto the scene at the end of the 2000s that the full implications of this permanence became apparent.
Any social media post can now set off a social explosion, and while I don't think this is inherently bad (a racist social media post merely reveals that someone is a racist, something they would be with or without social media, and something that is genuinely useful to know), our approach to the world has undoubtedly changed.
In many ways, this has elevated our public images above our actual selves. Wearing a mask crafted for our public persona is normal and timeless, but now we inhabit those personas more than ever before, and we increasingly interact with each other's characters rather than with actual people.
And since we now live so much of our lives on the internet, this has inevitably taken a toll on the mental health of many people trying to adjust to a lifestyle that is simply unhealthy for us; we are, by nature, social animals who need real human connections to flourish.
Social media, by contrast, replaces genuine human connection with constant persona-to-persona dialogue, which is simply bad for us in the long run. And now Google seems poised to extend that same problem to every other aspect of our daily lives and declare it the future. For humanity's sake, I hope it isn't.
Google, OpenAI, and all the rest have built an algorithm in the shape of a person
Let's start with the fact that Google, as a business, is in a bind. It's not really Google's fault that OpenAI and other companies took generative AI and released it into the world for profit. Google, in classic prisoner's-dilemma fashion, is simply adjusting to the market.
That doesn't change the fact that this market ought to be destroyed for the sake of humanity. It is a market created by people who founded tech startups and whose social interactions are limited to other "elite" and often asocial tech founders and their employees. It's a market made by people who think they can just create an AI agent to swipe through a dating app for them and pick out the perfect match, so they never have to go through the messy human cycle of excitement and frustration that comes with dating.
Google is taking this approach across its entire line of productivity apps. AI will soon be able to write your work emails for you based on the content of the email thread with just a prompt or two. It can even read an email thread and summarize it for you, so you don't actually have to read what anyone else has written. All that banter between colleagues that helps us bond at work will apparently be lost.
I already get plenty of PR emails that read as if they were written by an impersonal AI, and let me warn you all: your inbox is about to turn into a hellhole. Mine will become unusable as the hundreds, possibly thousands, of AI-generated pitches on topics I don't cover drown out the dozen or so emails a week on topics I do. Whatever you do for work, the result will be the same, and if this isn't your experience yet, my condolences for what's coming.
Then there's Google Docs, Sheets, and Slides, where you can trade your creativity and personality for a prompt and let Google's AI just do it for you. As a starting point, of course.
And of course, no one's workload will ever pressure them into cutting corners and just having Google quickly draft the work product for them because there's no time to actually take that starting point and fill it with their own intelligence, personality, and dedication.
And it definitely won't be a problem when your boss tells you to use Google's AI to produce your work because it's faster and "good enough" for now (and "for now" always ends up being forever), draining whatever joy you took in the job. Then again, if you hate your job, maybe Google understands that and is handing you the tools to simply phone it in. Your boss, however, will soon start to wonder why they're paying you at all when the AI can do your job just fine.
Don't want to write an essay for school and actually learn something along the way? Don't worry, Google Tailwind can do your schoolwork for you; just feed it a few sites to lift source material from and turn in its work as your own.
Your teacher doesn't have time to check your work carefully anyway, since teachers are already overworked, so as long as your essay doesn't contain the words "As a large language model, I can't say from personal experience how I spent my summer vacation, but if I had one, I…", you'll probably be fine.
Something tells me no teacher went into the profession just to grade the output of a large language model for terrible pay; but maybe they can find a large language model to grade their students' work for them, and then no one has to teach anyone anything, and the models can just sort it out among themselves.
Google wants you to know that its language model is now so good it can answer medical-licensing-exam-style questions, so soon you'll be able to just text your doctor a list of symptoms, and the AI can hand back the diagnosis you need to get a doctor's note for your boss and a few days off work.
Even your family photos are not safe, because the picture you took of your children can be retouched and reworked into perfection right on your phone, instead of remaining an honest snapshot of the people you love as they really are. It will, however, absolutely help you become an Instagram influencer, since only perfect kids are fit for social media.
The list of potential use cases for these new AI tools from Google and others is truly endless, and they all have one thing in common: they put another layer of distance between us and the people we once had to interact with to get things done, and in the end they devalue us all in the process.