Most days of the week, John Smith, an administrative assistant to a college faculty member, looks up the professor’s availability to set up appointments. It is one of the more tedious small tasks that keep him from more important or rewarding assignments.
Some scientists are touting a solution: Smith could be freed from these routine duties by the latest advance in artificial intelligence, ChatGPT. The AI program rapidly draws on vast amounts of data to produce written answers to queries and, in Smith’s case, could expedite the time he spends arranging meetings.
There are concerns, however. The answers and information that ChatGPT delivers aren’t always correct, and its writing can be stilted, failing to mimic the style or character of an individual’s writing. Some AI experts warn that chat-generated text could be used to spread misinformation in the wrong hands, or could feed incorrect information to students who use it in classrooms for help with math problems, writing or other assignments.
And then there is the most crucial question: if you are a white-collar worker, will ChatGPT be helpful in your job or replace you entirely?
John Smith is hypothetical, but as Moshe Vardi, Distinguished Service Professor in Computational Engineering at Rice University, puts it, his situation could apply to any number of workers in similar fields. Vardi and other AI experts are weighing the pros, cons and ethics of adding this latest technology to the workforce.
For instance: while ChatGPT might be a big help with scheduling, a business might decide it needs fewer assistants – a definite downside for employees.
“It may be that businesses and companies decide that one assistant can now cover multiple members of that company or business if they have the technology now handling the routine tasks, leaving more time for these assistants to cater to the more time-consuming or individualized tasks,” said Vardi.
Vardi said this technology may also cause the assistants’ jobs – and others – to evolve. Assistants might be required to go back to school or get additional training if they adopt new, more complicated duties.
Vardi likens ChatGPT’s ability to disrupt the availability of white-collar work to the automation of machine work, which decreased the number of working-class and manufacturing jobs.
“Usually for professionals, technology has been good for us, and in the past has only affected working class people,” Vardi said. “But with ChatGPT, automation is getting to professionals and threatening the jobs of white-collar workers and there’s an anxiety or fear that their jobs may be eliminated.”
Peter Stone, a professor of computer science and director of robotics at UT Austin and executive director of Sony AI America, does not see these jobs under threat of elimination, but does see these industries transitioning because of the emergence of ChatGPT.
“I think more often technologies like this don’t outright replace jobs, but rather transform them,” Stone said. “We are still trying to figure out the ways that they are going to change jobs and even the economic environment.”
Stone said that to get the most out of this technology, industry leaders should concentrate their efforts on determining what people do well and what the AI components of ChatGPT do well.
Kathy Hirsh-Pasek, a senior fellow at the Brookings Institution – a nonprofit public policy research group – said the introduction of ChatGPT could change employee training and skills in the future workforce.
Hirsh-Pasek refers to this transition as the “age of creativity,” in which productivity may no longer be the most valued trait in an employee; adaptability might be.
“It is possible in the world of ChatGPT that skills like originality and novelty will be the currency of the time as opposed to skills like synthesizing, data-crunching and efficiency, and if that happens what is the consequence?” she said.
Hirsh-Pasek agrees with Vardi and Stone: she said it is inevitable that some aspects of current jobs will become obsolete, as ChatGPT already performs tasks that employees do now.
In addition to administrative assistants, she said, this technology could pose an issue for paralegals, physicians, real estate agents and journalists, as it can write baseline legal briefs, suggest initial diagnoses on online medical platforms, fill out real estate documents and draft sample articles.
This isn’t happening yet, however, Hirsh-Pasek said, because ChatGPT is not perfected. Yes, it can complete all these smaller jobs, but the information in the resulting articles, documents and diagnoses is not always accurate.
And even if this does occur, certain aspects of these jobs will remain. Regardless of the rise of this technology and more advanced versions of it, no form of AI could entirely uproot the need for real employees – especially in fields like health care, she said.
“Human knowledge, understanding of social cues and all those skills will be preserved, it’s just the routine tasks that are doomed,” Hirsh-Pasek said.
Hirsh-Pasek also predicts that ChatGPT could create the need for collaboration across industries, especially when considering the ways to implement it in the workforce as this technology continues to develop.
Houston Community College colleagues Samir Saber, Dr. Fidelis Ngang and Dr. Raymond G. Brown – respectively the dean of the Digital & Information Technology Center of Excellence and the department chair and program coordinator of the college’s artificial intelligence program – said ChatGPT is also set to create new jobs for the future workforce.
“New jobs will appear that may even rearrange workflows; this technology will open up so many more doors than it will close,” Brown said. “We can’t even imagine right now, what some of those will be.”
Saber said the World Economic Forum has projected that 85 million jobs will be displaced but 97 million new roles created as ChatGPT enters the market. He thinks the technology really only threatens areas like customer service call centers or chat-messaging responders.
“I think at the end of the day, when we want to talk about jobs, this technology is an enhancer. It will allow us to know what we do better and what technology does better,” Saber said. “And the jobs that it will drastically impact are the tasks that humans don’t want to or like to do anyway.”
Ngang said that, as of right now, most industries are cautious about adding ChatGPT to their workplaces.
“Certain things won’t go away, so some of the fear is unnecessary, I understand where it comes from – take Detroit, Michigan for an example – it used to be the center of the universe, maybe even five years ago and because manufacturing got automated, jobs were lost and families were affected,” Ngang said. “But as far as human evolution, why would we have the brain capacity to create and develop this technology if it wasn’t part of our journey?”
And similar to how Hirsh-Pasek said there could be new ways of training employees, Brown said there could be new fields of study that assist in that training.
One field that is already developing is “prompt engineering,” Brown said: the study of how best to design the input presented to ChatGPT to produce better results from the technology.
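Brown’s example can be made concrete with a toy sketch. The snippet below is illustrative only – `build_prompt` is a hypothetical helper, not part of any real ChatGPT API – and simply contrasts a vague prompt with a structured one that pins down persona, task and output format, the kinds of design choices prompt engineering studies.

```python
# A minimal sketch of "prompt engineering". All names are illustrative,
# not part of any real ChatGPT or OpenAI API.

def build_prompt(role: str, task: str, constraints: list[str], example: str = "") -> str:
    """Compose a structured prompt: persona, task, explicit constraints, optional example."""
    lines = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        lines.append("Constraints:")
        lines.extend(f"- {c}" for c in constraints)
    if example:
        lines.append(f"Example of the desired output:\n{example}")
    return "\n".join(lines)

# A vague prompt leaves the model to guess tone, length and format.
vague = "write about meeting times"

# A structured prompt pins down who is "speaking", what to produce and how.
structured = build_prompt(
    role="an administrative assistant scheduling faculty meetings",
    task="Draft a two-sentence email proposing three meeting times next week.",
    constraints=[
        "Use a polite, professional tone.",
        "List the proposed times as a bulleted list.",
    ],
)
print(structured)
```

The point of the sketch is that the structured version gives the model far less to guess about, which is exactly the kind of input design the emerging field examines.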
ChatGPT is also likely to transform the education of the future workforce in high schools and colleges around the country.
Saber sees this as a tool that, once fully developed, can be used inside the classroom to help students complete assignments and gather information, much like Google and other internet search engines.
However, there are concerns that ChatGPT could lead to more cases of plagiarism by making it easier for students to submit text produced directly by the AI system instead of doing the work on their own.
“It is impossible to avoid plagiarism altogether, because at the end of the day the way the model is designed, you can tell it to avoid plagiarism but that doesn’t work,” Saber said. “You can use it to get information that you need and then reword it with your own information, even if it does tell you the references – which in itself, is low level plagiarism; so, it’s very difficult for teachers to fight that and what we are saying is that teachers are going to have to get creative with how the teachers assess their students.”
Other than exploring it in classroom discussions and panels, Houston Community College has not implemented ChatGPT as a tool in their curriculum, the colleagues said.
Hirsh-Pasek, who is also a professor, recently allowed her students to use it to complete their midterm essays; she said most of them gave up on the technology and wrote without it.
“They said they stopped using it because the answers were ‘stupid’ and that I wouldn’t have liked them,” she said. “They said the essays didn’t state a thesis or take a position, and that if they were to turn the writing in, they would’ve earned somewhere around a C+.”
Another fault of ChatGPT is that it does not consistently produce accurate information, a problem that would be exacerbated in the classroom as students come to rely on it for research.
Vardi said the advent of Google and Google Scholar made searching for information in libraries unnecessary, and the same could happen with ChatGPT – but because of its unpredictable accuracy, it is not quite there yet.
Vardi said AI experts are attempting to further understand the negative and positive aspects of ChatGPT, prior to making more advanced versions available to the public.
He was one of the signatories of a letter published by the Future of Life Institute that calls for a pause in the development of AI systems more powerful than GPT-4, the model behind the current version of ChatGPT, and calls on experts in these technologies to jointly establish a set of protocols for the use and development of these tools.
“The issue that the letter wants to address is not the issue within this technology itself, but with our social, economic and political environment for innovation,” Vardi said. “People in Silicon Valley and other experts are calling for a pause because they tend to let technology do its thing in the marketplace, but with this technology they don’t want to let it run amok without taking precautions or thinking of the possible consequences.”
According to Vardi, as with the internet and social media, no expert can predict every way that ChatGPT will affect the workforce, education or any other sector of society. It may be 10 or 15 years down the road before anyone understands what this technology is capable of, he said.
“Scientists don’t always have the most imagination to know how things can go wrong. Yes, this technology is good for the world, but it’s very hard to acknowledge how it will have an adverse impact until the data becomes clearer later down the line,” he said.