You still need your chatbot to stick to business rules and act like a real customer service rep, and that’s incredibly hard to accomplish with generative models, where you can’t be there to evaluate every generated answer and where the chatbot can go off on a tangent and suddenly start giving you free therapy when you originally came in to order pizza.
Don’t get me wrong, they’re great for many applications with a human in the loop. They can help customer service reps (as one example) work more effectively, provide more help to users, and dedicate more time to the customers who still need a human to solve their issues.
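To show what I mean by making the model stick to business rules from the outside, here’s a toy sketch. Everything in it (the topic list, the phrases, the function names) is made up for illustration, not any real framework; a real deployment would use proper intent classification and moderation, but the shape is the same: hard rules live in ordinary code, not in the model.

```python
# Toy sketch of a deterministic "safety net" around a generative chatbot.
# All topic lists, phrases, and function names are invented for illustration.

ALLOWED_TOPICS = {"order", "pizza", "delivery", "refund", "menu"}
BANNED_PHRASES = ("therapy", "medical advice", "legal advice")

FALLBACK = "I can only help with orders and deliveries. Connecting you to a human."

def on_topic(message: str) -> bool:
    """Crude keyword gate: is this message about our actual business?"""
    return bool(set(message.lower().split()) & ALLOWED_TOPICS)

def guard_reply(user_message: str, model_reply: str) -> str:
    """Enforce business rules outside the model: check the request before it
    goes in and the generated answer before it goes out."""
    if not on_topic(user_message):
        return FALLBACK
    if any(phrase in model_reply.lower() for phrase in BANNED_PHRASES):
        return FALLBACK
    return model_reply

print(guard_reply("I want to order a pizza", "Sure! Which size?"))
# → Sure! Which size?
print(guard_reply("I have been feeling sad lately", "Maybe try some therapy?"))
# → I can only help with orders and deliveries. Connecting you to a human.
```

The point is that the model never gets the final word: the pizza question passes through, the free-therapy tangent gets caught by dumb, auditable code.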
Companies are already replacing some workforce with LLMs.
My opinion right now is that companies want you to believe LLMs are 100% capable of replacing humans, but that’s because people in upper management never listen to the damn developers down in the basement (aka me), so they have unrealistic expectations of AI coupled with an unending desire for money and success.
They are replacing them because they are greedy cunts, not because they are replaceable.
LLMs are excellent at producing high-volume, low-quality material. And it’s a sad fact of life that a lot of companies are perfectly willing to use low-quality material in their work.
> You still need your chatbot to stick to business rules and act like a real customer service rep, and that’s incredibly hard to accomplish with generative models
Isn’t that what, for instance, OpenAI’s embeddings are for?
> My opinion right now is that companies want you to believe they are 100% capable of replacing humans
Probably, but at the moment they can only do it partially.
> They are replacing them because they are greedy cunts, not because they are replaceable.
I partially agree. I mean, they are greedy cunts, but some tasks, like translating to and from certain languages, can easily be done even with the free ChatGPT demo, with better results than Google Translate, so human translators are unfortunately becoming quite replaceable.
It feels like you don’t realize that the people making these decisions don’t see employee salaries as anything but a line item to minimize, or see “customer service” as anything but a cost liability. They don’t care about customer experience. Hell, they actively want people to get frustrated and give up, because it saves the company money.
> see employee salaries as anything but a line item to minimize
YES. With or without AI in the mix, anything to maximize benefits. Companies would have a workforce made up entirely of unpaid slaves if they could. Many companies get rid of their oldest and best-paid employees to replace them with cheaper ones. Video game studios have fired employees right after a new game was finished just so they didn’t have to pay the devs any benefits. We are nothing but a means to make a company’s top execs richer.
While I don’t agree with anti-AI people, the fact that some AI-generated content is flawed doesn’t imply that all AI content is of bad quality.
> Companies are already replacing some workforce with LLMs.
As someone who works with LLMs, they shouldn’t…
> LLMs are excellent at producing high-volume, low-quality material. And it’s a sad fact of life that a lot of companies are perfectly willing to use low-quality material in their work.
If they had ever cared about quality, they would have treated their employees with dignity and paid them enough 😬
So there’s this concept called economics which you might want to read up on.
And has this “economics” business worked for companies today? Has it worked for us?
Maybe you meant to say capitalism? Capitalism has brought about the greatest wealth and prosperity in the history of mankind. Yes, it’s worked for us.
Huh? Are you replying to the right comment? or to yourself, or what exactly?
Free therapy, you say?
> Isn’t that what, for instance, OpenAI’s embeddings are for?
Do you mean the embeddings? https://platform.openai.com/docs/guides/embeddings/what-are-embeddings
If so:
Word embeddings and embedding layers are there to represent data in ways that allow the model to make use of it to generate text. That’s not the same as the model acting like a human. It may sound human in text or even speech, but its reasoning skills are questionable at best. You can try to make it stick to your company policy, but it will never (at this level) be able to operate under logic unless you hardcode that logic into it, which isn’t really possible with these models in any meaningful sense of the word; after all, they just predict the best next word to say. You’d have to wrap them in a shit ton of code and safety nets.
GPT models require massive amounts of data, so they are only that good at languages for which we have massive bodies of text or big Wikipedias. If your language doesn’t have good content on the internet, or freely available digitized content to train on, a machine still can’t replace translators (yet; I have no idea how long it will take until transfer learning is good enough to translate low-resource languages at the quality of English-to-French, for example).
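To make the embeddings point concrete, here’s a minimal toy sketch. The three-dimensional vectors are invented values standing in for real embedding-model output (no API calls); it shows what embedding similarity buys you, namely retrieval of the most relevant policy text, and what it doesn’t, namely enforcement of that policy:

```python
# Toy sketch: embeddings let you find the most *similar* policy snippet to a
# query via cosine similarity. The vectors below are invented toy values; a
# real system would get high-dimensional vectors from an embedding model.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# pretend embeddings of two policy snippets
policies = {
    "Refunds are only possible within 30 days of purchase.": [0.9, 0.1, 0.0],
    "Delivery is free for orders over 20 euros.": [0.1, 0.9, 0.1],
}

# pretend embedding of the customer question "Can I get my money back?"
query = [0.85, 0.2, 0.05]

best = max(policies, key=lambda snippet: cosine(query, policies[snippet]))
print(best)  # → Refunds are only possible within 30 days of purchase.

# Retrieval worked, but note what did NOT happen: nothing here checks the
# purchase date or enforces the 30-day rule. That logic still has to live
# in ordinary code around the model.
```

So embeddings get the right text in front of the model, but the model is still free to ignore it; the actual rule enforcement stays your job.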
It’s just like a real person. The only difference (afaik) is you can’t as easily tell AI to get back to work.
> YES. With or without AI in the mix, anything to maximize benefits. Companies would have a workforce made up entirely of unpaid slaves if they could. Many companies get rid of their oldest and best-paid employees to replace them with cheaper ones. Video game studios have fired employees right after a new game was finished just so they didn’t have to pay the devs any benefits. We are nothing but a means to make a company’s top execs richer.
Not as long as benefits are up to expectations.
Can you list a few companies that are replacing workforce with LLMs successfully? Without a downgrade in service quality?
If by “companies” you mean scammers - sure.