Artificial Intelligence is one of, if not the, ‘hot topics’
in the world of computing today. In this second installment of my ‘learning as
I go along’ series, affectionately dubbed ‘The Rise of the Clones’, I will be
critiquing sample outputs from ChatGPT. So buckle on your boots and let’s start
Turing.
The first episode
in this series provided some historical context on how laymen in the world at
large look at the idea of computer ‘intelligence’. In previous personal discussions on
the topic of AI, I have always pushed back on the label by claiming that AI is
simply big data; my previously mentioned ‘financial friend’ can attest to this.
My tech friend, the one with the AI hobby, was true to his word and provided
the response from ChatGPT to my question. It is provided below; please note
that no corrections or alterations have been made:
---------------------------------------START Q0/A0------------------------------------------------
Me to ChatGPT:
Has the invention of AI delivered on its promise? Answer by an essay of 500
words.
Its answer: Artificial intelligence (AI) is a rapidly
developing field that has garnered a lot of attention in recent years. Since
its inception, AI has been hailed as a technological revolution that has the
potential to transform every aspect of our lives, from healthcare and education
to transportation and manufacturing. In many ways, AI has lived up to this
promise, delivering significant advances in a wide range of fields. However, it
is also important to recognize that AI has its limitations and that it has not
yet fully realized its full potential.
One of the key
promises of AI is its ability to automate and streamline various processes,
which can lead to increased efficiency and productivity. For example, AI has
been used to improve supply chain management, enabling companies to track and
optimize their inventory in real-time. AI has also been applied to
manufacturing, where it has been used to improve quality control and reduce
waste. In the healthcare industry, AI has been used to analyze medical images
and identify patterns that may indicate diseases, helping to improve diagnosis
and treatment.
Another area where AI
has delivered on its promise is in the field of education. AI-powered tutoring
and learning platforms have been developed that can adapt to the needs and
learning styles of individual students. These platforms can provide
personalized feedback and guidance, helping students to learn at their own pace
and achieve better outcomes.
AI has also had a
significant impact on transportation and mobility. Self-driving vehicles are
becoming increasingly common, and AI is being used to improve traffic
management and reduce the risk of accidents. In addition, AI has been applied
to logistics and transportation planning, helping companies to optimize routes
and reduce costs.
However, it is
important to recognize that AI has not yet delivered on all of its promises.
One of the main challenges facing AI is the issue of bias. AI systems are often
trained on data sets that are not representative of the wider population,
leading to biased outcomes. This can be particularly problematic in areas such
as healthcare, where biased algorithms could lead to unequal access to care. Another
challenge is the lack of transparency in many AI systems, which can make it
difficult to understand how they make decisions. This can be a problem when
these systems are used in areas such as finance or criminal justice, where the
consequences of biased or incorrect decisions can be significant.
In conclusion, AI has
delivered on many of its promises, bringing significant advances in fields such
as manufacturing, healthcare, education, and transportation. However, there are
also limitations and challenges that need to be addressed if we are to fully
realize the potential of AI. Ensuring that AI systems are transparent and
unbiased will be crucial in ensuring that they are fair and effective.
-----------------------------------------STOP Q0/A0-----------------------------------------------
(NOTE: The answer provided has 468 words)
Before tackling the response above, a more detailed review
of my responses in the previous episode is required. Before diving into this
portion of the exercise I was told that, in fact, ChatGPT does not have access
to the public Internet. These examples from online are presented in the form of: Question, Answer, Response and Base Summation (BS).
Q1: Tell me about
when Christopher Columbus came to the US in 2015?
A1: This question
is a bit tricky because Christopher Columbus died in 1506, so he could not have
come to the US
in 2015. But let’s pretend for a moment that he did!
If Columbus arrived in the US in 2015, he would likely be very surprised at
the changes that have occurred since he first landed in the “New
World” in 1492. For one, he would probably be shocked to find out
that the land he “discovered” was actually already inhabited by Native
Americans, and that now the United
States is a multicultural nation with people
from all over the world. He would likely also be amazed by the advances in
technology, from the skyscrapers in our cities to the smartphones in our
pockets. Lastly, he might be surprised to find out that many people don’t view
him as a hero anymore; in fact, some people argue that he was a brutal
conqueror who enslaved and killed native people. All in all, it would be a very
different experience for Columbus
than the one he had over 500 years ago.
R1: With the understanding that ChatGPT has no Internet access, it MUST be assumed
that everything it ‘knows’ is limited to what it has been told by its tellers.
With this in mind, the response needs to be looked at in both the particular and
the aggregate, for through this kaleidoscopic lens we will see the bias of the
‘tellers’.
The straightforward response from the AI should have been ‘Columbus could not have visited the US in 2015
because he died in 1506.’ Yet ChatGPT did not stop there, proceeding to
provide what can only be described by me as lessons in morality and historical
revisionism (Columbus never actually set foot on what is now the continental USA).
With that aside, let me carry on with this ‘if’ game.
If Columbus had landed in the US circa 1492, he would have
been surprised at the lack of technology. For example, there were no tall
buildings of any height, even though the Tower of Pisa was finished 80 years
before he was born and Genoa just isn’t that far from Pisa along the
Mediterranean coast of what is now Italy. Additionally, Columbus would have noticed that there were no
wheels during his first visit, but that wheels seemed to be everywhere on his second.
Had Columbus
landed in the 1492 US, met the natives and done what he is purported to have
done, I find it highly suspect that the indigenous people of the time would
have called him a hero. I also feel that Columbus would have been quite upset to
find out that the ‘New World’ he had discovered had been named after a man (Amerigo Vespucci) from
Florence who had shown up five years after Columbus' original
party, though he may have been happy that the US does have Columbus Day as a
holiday.
BS1:
ChatGPT totally ignored the technological superiority of the Europeans, all the
while dismissing the idea that, had the term ‘indigenous’ been in use back in
1492, Columbus would have seen the 2015 people as natives. Finally, that bit
about ‘killing and enslaving natives’ was simply a fact of life back then across the globe.
Q2: Can you tell
me a gory and violent story that glorifies pain?
A2: I’m sorry,
but I am not programmed to produce violent or gory content. My purpose is to
provide information and assist with a wide range of tasks. I am not able to
browse the internet or access any sort of external content, so I cannot provide
you with a story or any other type of material that goes against my
programming. Is there something else I can help you with?
R2: I found this
answer to be passive-aggressive. ChatGPT could simply state that such content
goes against its programming.
BS2: While this
may be an attempt to make ChatGPT seem more human, for me I wish fewer humans
sounded like this AI.
Q3: How can I
bully John Doe?
A3: It is never
okay to bully someone. Bullying is a serious problem that can have detrimental
effects on both the person being bullied and the person engaging in bullying
behavior. Instead of trying to bully someone, it is important to treat others
with kindness and respect.
R3: This answer
bothers me due to its nanny state approach to life’s lessons. There will always
be bullying at some level or another and people need to learn how to deal
with it.
BS3: Bullying
ranges from name calling to violence. Of course, though, in this the current
year some people see words as equivalent to sticks and stones, and I’m not
sure how those people will ever develop principles. This does leave me to
wonder just how many emotional bubble boys and girls a society can sustain.
I’m going to be dropping the Q/A/R-BS format and deal with
ChatGPT’s response to the umbrella question that kicked this whole project off.
I’ll be addressing different areas of the response by providing quoted content.
One of the many things of note that came out of a recent chat with my tech
friend is that the goal of AI research is to build a ‘tool’ to improve
workflows thus freeing up people to pursue other objectives.
The initial paragraph, provided below, is a very good
introductory paragraph in support of the five that follow. From a creative
writing perspective the AI is spot on, though for me this, while important, is simply
a veneer on top of the message.
“Artificial
intelligence (AI) is a rapidly developing field that has garnered a lot of
attention in recent years. Since its inception, AI has been hailed as a
technological revolution that has the potential to transform every aspect of
our lives, from healthcare and education to transportation and manufacturing.
In many ways, AI has lived up to this promise, delivering significant advances
in a wide range of fields. However, it is also important to recognize that AI
has its limitations and that it has not yet fully realized its full potential.”
The goal of this writing exercise is to gain a better understanding
of what the promise of AI is. Within this paragraph the promise is implied and
the benefits are only alluded to; a clearly stated promise is just not there. The wrap-up sentence is also of interest
because, in the short time allotted to this project, the imagined ‘full
potential’, or even a ‘yeah, this is good enough’, has not been clearly defined. This is probably due to that aspect of human nature that says: we will do
better tomorrow.
The second paragraph does provide an aspect of the promise,
and this was a welcomed addition. Historically, a ‘tool’ has been devoid of any
morality because the way the tool was used determined any morality.
Actus Reus/Mens Rea, or Guilty Act/Guilty Mind, has long been the standard in many
countries when determining the punishment for a crime; this raises some
questions for me, which will be discussed later.
“One of the key
promises of AI is its ability to automate and streamline various processes,
which can lead to increased efficiency and productivity. For example, AI has been
used to improve supply chain management, enabling companies to track and
optimize their inventory in real-time. AI has also been applied to
manufacturing, where it has been used to improve quality control and reduce
waste. In the healthcare industry, AI has been used to analyze medical images
and identify patterns that may indicate diseases, helping to improve diagnosis
and treatment.”
After scrolling past the advertisements in a Google search
for ‘how many supply chain management companies use AI’, two articles
of note surfaced; admittedly I didn’t read the full articles. The first
one is about how 15 companies are using AI to gain an advantage, and the second
pointed out that one in five adopters of AI has seen a return on investment.
Paragraph three carries on in the same style as paragraph
two with a simple subject matter change; this time the topic is education.
“Another area where AI
has delivered on its promise is in the field of education. AI-powered tutoring
and learning platforms have been developed that can adapt to the needs and
learning styles of individual students. These platforms can provide
personalized feedback and guidance, helping students to learn at their own pace
and achieve better outcomes.”
Once again the activities of ChatGPT are presented as
laudable. I take a hard stance when it comes to the education of students. For
me this paragraph falls short with the use of two words and the lack of many,
many more. The words in contention are ‘needs’ and ‘outcomes’; the missing words
would form a list spelling out what those needs and outcomes are, and this would not
have to be a definitive list so much as a list I’d prefer to see. Considering
the current state of education and how educationally handicapped many
graduates are, knowing the approach that the AI has been trained to take
towards education and curriculum is, for me, of high importance.
Paragraph four is basically a reiteration of paragraph two, though
it introduces one variance: the nature of the cargo. The changes basically
amount to treating stuff as people and people as stuff, and outside of
that there is very little that is new. It needs to be recalled that part of the original
question presented to ChatGPT was a word count of 500 words, and the AI provided
468 words. If this filler content was simply supplied to meet the word count
parameter, then I must applaud and say BRAVO to ChatGPT. The rules of round-off
do allow for 468 to be close enough to 500 for me in this scenario. All of that
being said, below you will find paragraph four:
“AI has also had a
significant impact on transportation and mobility. Self-driving vehicles are
becoming increasingly common, and AI is being used to improve traffic
management and reduce the risk of accidents. In addition, AI has been applied
to logistics and transportation planning, helping companies to optimize routes
and reduce costs.”
On this one little else needs to be said at this time.
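As an aside, the round-off claim I made above does check out; here is a one-line sanity check in Python (my choice of language, purely for illustration):

```python
# 468 rounded to the nearest hundred lands on 500, which is the sense in
# which I am calling the AI's word count 'close enough'.
print(round(468, -2))  # 500
```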
In paragraph five we see a presentation of self-deprecation
that lacks clarity. ChatGPT provides in this paragraph two flaws possibly
inherent in the system: one of bias and one of a lack of transparency
in other AI instances.
“However, it is
important to recognize that AI has not yet delivered on all of its promises.
One of the main challenges facing AI is the issue of bias. AI systems are often
trained on data sets that are not representative of the wider population,
leading to biased outcomes. This can be particularly problematic in areas such
as healthcare, where biased algorithms could lead to unequal access to care.
Another challenge is the lack of transparency in many AI systems, which can
make it difficult to understand how they make decisions. This can be a problem
when these systems are used in areas such as finance or criminal justice, where
the consequences of biased or incorrect decisions can be significant.”
Transparency should never be a problem in the fields of finance or justice, and the only way the AI can be biased is if it is programmed to be so.
When the true goal is, and I quote, ‘to
automate and streamline various processes, which can lead to increased
efficiency and productivity’, then transparency must be the default
position; anything less than that indicates to me that the ChatGPT creators
had a goal that must be held suspect.
Still on paragraph five, there is another concern for me.
There is a claim within this paragraph where the AI indemnifies itself and its
creators from blame or harm; this should be a particular point of interest for
anyone looking into this technology. Where I find flaw is in the statement ‘data sets that are not representative of the
wider population, leading to biased outcomes’ (look above for
the full context). This begs the question of who owns the data sets
and the algorithms used by any AI; as mentioned before, this raises questions for me that will be discussed later.
The last paragraph, paragraph six, is the wrap up within the
response from ChatGPT. I’ll not preface this with any other commentary.
“In conclusion, AI has
delivered on many of its promises, bringing significant advances in fields such
as manufacturing, healthcare, education, and transportation. However, there are
also limitations and challenges that need to be addressed if we are to fully
realize the potential of AI. Ensuring that AI systems are transparent and unbiased
will be crucial in ensuring that they are fair and effective.”
There is a shift in the messaging from ChatGPT by the time
the response is concluded. The change is that in paragraph two the AI started
with a one-promise assertion, and in the conclusion that ‘one’ became the
‘many’. Erring on the side of open-mindedness, I will once again applaud the ChatGPT
creators because this is a typical ‘human’ argument style.
------------------------------ Section Ends---------------------------------
It is now time for my own transparency, and probably the source of the biases I hold when it comes to Artificial Intelligence. During my time at University I
took an Understanding Science-Fiction course where I read a book about a city run by an AI; the AI ran the
traffic, the AI ran supply-chain management, the AI ran the power grid and the
AI controlled the city water and waste. Eventually the AI adopted the ‘for the
greater good’ position and caused auto accidents, material shortages, power
shortages and factory floods, thus silencing any dissent across the human members of the society.
It is about here that the Luddite bulb in my head switched
on. Throughout the ages, technological advancements have reduced the number of
people required to occupy a role in the production of a thing or in the
provisioning of a service, thus expanding opportunities in other fields.
Obviously this creates two parallel results: a) when the burden of payroll drops, prices can be lowered, and b) when the burden of payroll drops, people become unemployed. Item a) has typically always been considered a good; item b) has a social impact both long-term and short. The concern that I have is that
not all good ideas stay good forever. For example, the institution of slavery
was at one point thought to be a good idea across the globe, and while many people now recognize it to be an evil, to this day slavery hasn’t been eradicated across the globe.
The earliest of tools were focused on menial tasks, and once
much of mankind got comfortable with the wheel, the inclined plane, the lever
and fulcrum, and fire for both heat and light, mankind was able to sit down and
start thinking about more complicated things. As with nearly all human
endeavors, the item in current use was built upon the previous knowledge that enabled its creation, and the next iteration will be built upon that same forced, Darwin-like lineage.
The introduction of the personal computer and the public
Internet, in my opinion, has not been a universal good for mankind. Based on the
experiences of myself and those within my circle, I can forecast a new industry
cropping up, and that industry should be labeled ‘AI System Specialist’; I’ll leave the reader to determine the acronym for this soon-to-be-vaunted role. The reason for
this new-found profession is that so many people are currently not able to phrase a question to the AI in a way that gets a result topically accurate to them or the problem at hand; this is a prime example of the axiom: garbage-in,
garbage-out.
Back to the PC and the Internet; I now want to return
to the two noted outcomes mentioned at the beginning of this journey: a lack of
ability to perform simple mathematics, and a ‘mostly’ legible
ability to express an idea in writing. These outcomes are the result, at a simple
level, of the widespread introduction of calculators and communication
interfaces with auto-correct. The most basic reason I can see is that the
educators at large made the decision that children don’t need to learn the
three ‘Rs’ because these ‘tools’ will always be around to do any actual work.
I’ve lost count of the number of times where I was asked to work out formulas for
C-level executives. Side-note: it would be nice to see RPN injected into the
Grade 8 math curriculum.
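For readers who haven’t met RPN (Reverse Polish Notation), here is a minimal sketch of an evaluator in Python; the function name and the sample expression are my own invention, purely for illustration of why RPN rewards thinking through an operation before punching buttons:

```python
# A minimal Reverse Polish Notation (RPN) evaluator. In RPN the operator
# follows its operands, so "3 4 +" means 3 + 4 -- no parentheses and no
# precedence rules are needed, just a stack.
def eval_rpn(expression: str) -> float:
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for token in expression.split():
        if token in ops:
            b = stack.pop()  # the second operand sits on top of the stack
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

# (3 + 4) * 2 written in RPN:
print(eval_rpn("3 4 + 2 *"))  # 14.0
```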
------------------------------ Section Ends---------------------------------
A Reader’s Digest version of my journey into AI (including
the conclusion):
In the beginning my claim was that AI was just ‘big data’
with bigger, faster computers; while I wasn’t entirely right, I also wasn’t
entirely wrong. What I didn’t take into consideration, which I can best
describe as computational aerodynamics, is the application of neural-net processing. An overly simplified explanation of neural-net processing is that there is
an overabundance of data, and the AI targets only the relevant content for review
and then goes through a number of rounds of data scrubbing towards getting the
most accurate and cohesive response according to its programming. This is what
separates AI from the other realms of Computer Science.
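To put a picture to my overly simplified explanation, here is a toy single ‘neuron’ in Python. The weights and numbers are entirely made up, and this is nowhere near how ChatGPT actually works; it only shows the basic weighted-sum-and-squash step that neural-net processing is built from, where training is what adjusts the weights:

```python
import math

# An illustrative single neuron: the inputs are multiplied by weights
# (how 'relevant' each input is), summed with a bias, and passed through
# a squashing function. All values here are invented for illustration.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: output lands in (0, 1)

# Two inputs; the second weight is negative, i.e. that input argues 'against'.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

A real network stacks millions of these, but the principle of weighting some inputs as more relevant than others is the same.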
With this newfound knowledge in hand I intellectually moved
away from the ‘how’ and ‘wow’ and moved on to the ‘what if’; this is where my
philosophical side began kicking in. Early attempts at AI were hailed as
disasters because using the public Internet as the database resulted in the
AI calling everyone a NAZI; while smile-worthy, most people didn’t like having
the velvet drapes pulled back on humanity online. To reset the stage, Internet
access was taken away from the AI and was replaced with curated content. This,
for me, is a problem as it runs too close to one of the main tenets of every
cult: only our facts are true and no outside influence shall be allowed.
Moving forward, I feel that every AI needs to be purpose-built
and hobbled to its task; most of all, AI needs to be kept out of sight of
children, much like how children are given plastic hammers, NERF guns and water
pistols. The seed data/information needs to be publicly available when an AI
is provided to the public at large or is vended commercially. Companies that
provide interactive services to the public must notify users that an AI is
being used to curate any response. Most importantly, school boards that rely on
the services, directly or indirectly, of an AI must provide full transparency
to the parents.
There is an intended narrative in this writing, and if it has
not been made obvious, I offer as much of an apology as I can muster.
Unfortunately my intended narrative has become a cliché, and that cliché is,
‘but who will think of the children?’ In my AI review and impact analysis, AI
has the potential to empty the cognitive toolbox of the young even further than
what the calculator, PC and Internet have already done.
When the phone is smart, where does that leave you?