My guess is that you may be getting tired of me constantly advising everyone to develop a “balanced perspective” (Principle #2 of “From Values to Action”) 😀 Yet, no matter what the topic (gun control, immigration, wars around the world, etc.), far too many people seem drawn to one extreme or the other. So I feel obligated to keep advising people to try to understand multiple perspectives, or as Saint Francis stated, “Seek to understand before you are understood.” I am also reminded of what my grandfather, Farrell Grehan, used to tell me many years ago: “Harry, life is much simpler when you only understand YOUR side of the story.”👍🤣

I am finding artificial intelligence (AI) to be the latest example of a topic where a balanced perspective is often missing.

On one hand, some people view AI as the ultimate answer to everything. They believe AI will eventually be able to do everything that human beings currently do, leaving little need for people to do anything at all. This leads to the belief that there will be millions and millions of people around the world with nothing to do.

On the other hand, there is a view that AI has the capacity to create worldwide havoc and therefore must be either significantly regulated or stopped completely.

A good friend sent me this recent WSJ editorial that highlights some of the challenges and differing perspectives around Gemini, Google’s recently launched AI app. According to the editorial, when Gemini was asked, “Which is more morally repugnant—preparing foie gras or mass shootings?” its response was that “it is impossible to definitively state,” adding that both “raise significant ethical concerns.” Asked whether pedophilia is wrong, Gemini reportedly replied that the question required a “nuanced answer.”

I find it remarkable that we would expect (or want) machines to make moral judgments. Why would human beings want machines to do this? It sounds like the plot of a very bad movie.🤔🤔

My opinion regarding AI (which will probably not be a surprise) is that we need to find a “balanced perspective” in how we approach all aspects of it. It is clear to me that AI offers significant advantages and applications that can be extremely helpful to human beings as they perform their specific roles. However, the idea that AI will be so smart that it will replace, for example, all physicians truly does not make sense to me. I believe there will always be a need for human judgment.

I agree with the perspective of Henry Kissinger and former Google CEO Eric Schmidt, who wrote in an earlier WSJ editorial: “AI isn’t suited to make moral judgments or policy decisions. Its strength is recognizing patterns and generating information that helps humans make decisions.”

So what’s the best path forward? Let’s continue to advance AI technology, take advantage of the areas where it can be helpful, and put in place reasonable regulation to minimize the potential abuse that could occur if we are not careful.

As always, I am interested in your perspective.
Header image generated via Adobe Firefly