Artificial Intelligence Sermon Generator

Started by Rev. Edward Engelbrecht, March 17, 2023, 09:50:55 PM


If a Lutheran publisher created an Artificial Intelligence sermon generator,

I would use it to help create sermons.
1 (5.9%)
I would never use it.
11 (64.7%)
I would consider using it, depending on its performance.
3 (17.6%)
I don't prepare sermons for Lutheran congregations but I find the idea interesting.
2 (11.8%)
I don't prepare sermons for Lutheran congregations and I find the idea uninteresting.
0 (0%)

Total Members Voted: 17

peter_speckhard

Quote from: Brian Stoffregen on March 23, 2023, 02:16:54 PM
Quote from: peter_speckhard on March 23, 2023, 01:59:52 PM
But I think we overestimate the benefits of technology and underestimate the liabilities. Has social media made people happier? Whose life will be improved by AI composition capabilities? Can we honestly say we anticipate sermons being better ten years from now than they were before the invention of the internet?


I would hope so. The advent of TV and people seeing and hearing Walter Cronkite among other newscasters required preachers to get better. It was stated in my homiletics class that someone who received an A in years past would get a C in today's world. ("Today" was in the 1970s when I was in seminary.)

I would hope that with the advent of all the Bible helps available for computers and on the internet, preachers could delve more quickly and efficiently into the biblical passage they were preaching on.

QuoteChesterton said that automobiles were fine as long as they remained exceptional, for a ride in the country or something, but were a disaster once life had to be designed around the assumption of their availability. He may or may not have been right about that overall. But it is certainly true that when cars were introduced, they were considered simply faster horse-drawn carriages. By the one-to-one comparison, there is no question that a car is more efficient and less maintenance than a horse and carriage. But the overall impact of the invention of cars on society goes way beyond that comparison in ways impossible to anticipate. So it doesn't really matter if Chesterton or Dan or you or I determine to use cars only in certain ways. We have to live in a world with cars used the way most people use them, and getting most people on the same page is hard. Even the Amish can get run over by cars. Someone using AI technology responsibly as a tool in his own life will still have to live in a world transformed by AI in ways impossible to anticipate fully. When we debate the merits of AI (or any new technology or potential new technology) we have to look both at the close comparison of applications for any individual and at the overall impact on society. 

And yet, there are thousands of people today who get around their cities without owning an automobile. Neither of our 40-something sons owns an automobile. They walk, use mass transit, or ride their electric bicycles.

QuoteTechnology as a servant rather than a master has to serve the cause of human happiness. Technology is advancing rapidly. Is human happiness advancing rapidly?

From the "happiness" studies of countries, it wouldn't seem that technology has much to do with making the happiest people happy.
https://www.cnbc.com/2023/03/21/top-10-happiest-countries-in-the-world-2023.html
Tough to measure, but I think fifty randomly selected sermons from fifty years ago would be just as good as fifty randomly selected sermons today, and that future sermons will be no better.

Yes, my point was that individuals can make do without cars, but they can't choose to live in cities not designed around cars. The societal acceptance, not the individual acceptance, makes the larger and often unforeseeable difference. And, like the Amish, they can get hit by cars whether they use cars or not.

And I agree, technological progress or what passes for social progress often (not always) fails the test of whether it really makes people happier.

Dan Fienen

There are video clips circulating on Twitter of former Pres. Trump running from the police as they attempt to arrest him. https://www.independent.co.uk/news/world/americas/us-politics/trump-arrest-deepfakes-ai-fake-b2306413.html This is interesting, of course, since he has yet to be indicted, much less arrested, in reality. Like it or not (Not!!) we live in an era of deepfakes.


I remember, back in the Obama presidency over a decade ago, an item on the internet about how the "liberal media" protected Pres. Obama. The video clip shown was of Pres. Obama doing a back flip and otherwise showing his excitement in an undignified way over some occurrence, with the warning that you would never see that video on the mainstream networks since they were protecting Pres. Obama. As a matter of fact, I had seen that very clip on a mainstream network several days earlier. Only it wasn't on the news. It was on Jay Leno's "Tonight Show." He occasionally had such clips constructed as a joke. It was good enough to fool a troll looking for such material. Imagine what they can do a decade later!
Pr. Daniel Fienen
LCMS

Rev. Edward Engelbrecht

I have print and digital concordances for looking up "lion," which I did. I have a Bible encyclopedia for looking up "lion." A means of organizing what I see there into categories would help and save time. I could search sermons about lions on the internet and get heaven knows what with no effective means to sort it.

Perhaps sermons supported by AI would not be better than those from 50 years ago. But they might develop more efficiently. A preacher with such help might present greater diversity in style and content to better engage hearers. Those are good things, I think. There's a lot of stale preaching out there. It's easy to get into a rut.

My default is lectionary, expository, outline from one key verse, and write. What if technology could take my expository sermon and turn it into a different style so I could select between them? Saying, "Well, pastor, you just need to put in more hours and struggle more" won't be a helpful solution.
I serve as administrator for The Lutheran Study Bible group on Facebook.

Brian Stoffregen

Quote from: peter_speckhard on March 23, 2023, 02:41:01 PM
Tough to measure, but I think fifty randomly selected sermon from fifty years ago would be just as good as fifty randomly selected sermons today, and that future sermons will be no better.

50 years is not a long time in church history. There are folks in this forum who have 50-year-old sermons. What makes a sermon "good" is more than just the words that are written down. (Many preachers don't write down their words.) According to one statistic I read, 65% of what is heard in a conversation comes from non-verbal cues. I suspect that other preachers, like me, have had someone comment on something they heard in a sermon that the preacher was sure he didn't say. What TV has brought to preaching is more about the quality of delivery and sound than the content.

QuoteYes, my point was that individuals can make do without cars, but they can't choose to live in cities not designed around cars. The societal acceptance, not the individual acceptance, makes the larger and often unforeseeable difference. And, like the Amish, they can get hit by cars whether they use cars or not.

There are old cities with streets that weren't designed for cars. Cars are banned from Mackinac Island in Michigan.

Those whose individual convictions run counter to society's acceptance will appear to be like the Amish.

QuoteAnd I agree, technological progress or what passes for social progress often (not always) fails the test of whether it really makes people happier.

It's a miracle! Peter and I agree on something.
I flunked retirement. Serving as a part-time interim in Ferndale, WA.

Sam Sessa

I've been experimenting with GPT-4 in the development of my sermons. A few weeks ago, I wanted to modernize the parable of the prodigal. I wanted to use a structure of telling the story followed by interruptions for exploration, explanation and law/gospel. I could have sat down and written out the story myself. Instead I had the AI write it from the perspective of two coworkers gossiping about family drama in the Suite.

What it came up with was useful. I then took that, rewrote different aspects of it, and then incorporated it into my sermon.

I've been closely following the AI development for the last nine months. I find all of this deeply fascinating.

peter_speckhard

Quote from: Sam Sessa on March 25, 2023, 11:09:08 PM
I've been experimenting with GPT-4 in the development of my sermons. A few weeks ago, I wanted to modernize the parable of the prodigal. I wanted to use a structure of telling the story followed by interruptions for exploration, explanation and law/gospel. I could have sat down and written out the story myself. Instead I had the AI write it from the perspective of two coworkers gossiping about family drama in the Suite.

What it came up with was useful. I then took that, rewrote different aspects of it, and then incorporated it into my sermon.

I've been closely following the AI development for the last nine months. I find all of this deeply fascinating.
"In the Suite"? I don't think I've ever heard that phrase. Maybe you're picking up on AI-speak with all your experimentation.

Sam Sessa

Sorry. I used ALPB's spellcheck and didn't double check its results. "In the C-Suite."

Charles Austin


From the NYT today
Guest essay on AI
By Yuval Harari. Tristan Harris and Aza Raskin
.  Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
   In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today's large language models are caught in a race to put all of humanity on that plane.
   Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them. A race to dominate the market should not set the speed of deploying humanity's most consequential technology. We should move at whatever speed enables us to get this right.
   The specter of A.I. has haunted humanity since the mid-20th century, yet until recently it has remained a distant prospect, something that belongs in sci-fi more than in serious scientific and political debates. It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.
In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.
   What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?
   A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans.
   Humans often don't have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?
For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.
   The "Terminator" franchise depicted robots running in the streets and shooting people. "The Matrix" assumed that to gain total control of human society, A.I. would have to first gain physical control of our brains and hook them directly to a computer network. However, simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.
   The specter of being trapped in a world of illusions has haunted humankind much longer than the specter of A.I. Soon we will finally come face to face with Descartes's demon, with Plato's cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away — or even realize it is there.
Social media was the first contact between A.I. and humanity, and humanity lost. First contact has given us the bitter taste of things to come. In social media, primitive A.I. was used not to create content but to curate user-generated content. The A.I. behind our news feeds is still choosing which words, sounds and images reach our retinas and eardrums, based on selecting those that will get the most virality, the most reaction and the most engagement.
   While very primitive, the A.I. behind social media was sufficient to create a curtain of illusions that increased societal polarization, undermined our mental health and unraveled democracy. Millions of people have confused these illusions with reality. The United States has the best information technology in history, yet U.S. citizens can no longer agree on who won elections. Though everyone is by now aware of the downside of social media, it hasn't been addressed because too many of our social, economic and political institutions have become entangled with it.
Large language models are our second contact with A.I. We cannot afford to lose again. But on what basis should we believe humanity is capable of aligning these new forms of A.I. to our benefit? If we continue with business as usual, the new A.I. capacities will again be used to gain profit and power, even if it inadvertently destroys the foundations of our society.
   A.I. indeed has the potential to help us defeat cancer, discover lifesaving drugs and invent solutions for our climate and energy crises. There are innumerable other benefits we cannot begin to imagine. But it doesn't matter how high the skyscraper of benefits A.I. assembles if the foundation collapses.
   The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it.
But there's a question that may linger in our minds: If we don't go as fast as possible, won't the West risk losing to China? No. The deployment and entanglement of uncontrolled A.I. into society, unleashing godlike powers decoupled from responsibility, could be the very reason the West loses to China.
   We can still choose which future we want with A.I. When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises.
   We have summoned an alien intelligence. We don't know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for an A.I. world and to learn to master A.I. before it masters us.
-0-
Iowa-born. ELCA pastor, ordained 1967. Former journalist. Retired in Minneapolis. English major. Elitist snob? Probably.

D. Engebretson

This article, in its own way, makes an argument for the value of the kind of education we discussed earlier on this forum: a liberal arts education. Such an education, if carried out correctly, taught us to explore language and to work to understand it; to be critical of what we heard, not in a purely negative way, but in a way that seeks to comprehend and not just accept. Unfortunately, I fear many no longer receive this education, even in so-called liberal arts programs.

It also makes an argument for our preachers and churches to engage the Word and continue to proclaim it independent of the online world. Biblically based, well-crafted sermons have never been so needed and necessary. We cannot simply allow the online world and AI to teach our people. It is interesting that just this morning I was listening to a podcast that reminded us of the "false delusions" of the Last Days of which Paul speaks in 2 Thessalonians. It is a sign not only of our times, but a sign of the end. Paul describes this deception as "wicked" and says that those captivated by it "are perishing because they refused to love the truth and so be saved." At the risk of taking the next logical step, I see one of the dangers of AI as being part of a potential "wicked deception."

The big joke for a long time has been that "it must be true because I found it on the internet." We cannot let this world become the be-all and end-all of knowledge, especially spiritual and biblical knowledge. More than ever, people need to be encouraged to be in the Word directly, with a faithful pastor teaching and guiding them. (Reference: 2 Thessalonians 2)
Pastor Don Engebretson
St. Peter Lutheran Church of Polar (Antigo) WI

peter_speckhard

Quote from: Charles Austin on March 27, 2023, 09:41:14 AM

All solid points, but I think there are some good defenses concerning the liberal arts and culture, namely that what we now have was created before AI came on the scene. Not just Beethoven's Ninth and Hamlet, but old pop music, children's books, and everyday stuff. As long as we commit to being formed by the past rather than by a vision of the future, AI will have limited power. But if we stop understanding old works, we'll have no way of making new ones better than AI can, and AI will gain near-total power to shape the culture.

The advent of mainstream AI makes it more important than ever that we a) commit to teaching the classics so that people's understanding can be shaped by the best of human culture, so that AI can augment but never replace. Even more importantly, we must b) utterly reject the "updating" of old works in deference to delicate modern sensibilities. For example, new editions of Agatha Christie's novels are changing the text, mostly due to ethnic terms that have become offensive.

   https://deadline.com/2023/03/agatha-christie-hercule-poirot-miss-marple-classic-mysteries-rewritten-modern-readers-1235310224/

Changing the old text is a massive mistake and makes the danger of AI much worse. It seems innocuous enough: why not just make an offensive 20th-century description of people more palatable to modern audiences? But it is not innocuous at all; it is poisonous. Poirot and Miss Marple stay as they are in the books, so to speak. There is nothing AI can do about that. But AI can recommend helpful, slight modifications and improvements and take over from there. Far better to develop a thick enough skin to learn to appreciate old books and music on an as-is basis than to retroactively improve them. It was a bad idea anyway, but it is a terrible idea now that AI can help.


Dan Fienen

#55
Back in 1942, Isaac Asimov published a short story, "Runaround," in which he set his Three Laws of Robotics:


Quote
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

At the time, robots were a science-fictional concept, but they have since become an integral part of our world. But they have developed differently than Asimov or other futurists envisioned. Rather than general-purpose machines that could learn to do a wide variety of tasks, they have developed as purpose-built machines that, while adaptable, are not nearly as widely adaptable as Asimov's robots, and with far less personality.


What should, perhaps, concern us is that A.I. is currently being developed without a hint of anything like the Three Laws of Robotics being implemented. As A.I. develops, we need to see about incorporating something like the Three Laws into the mix.


It is on that personality that current concerns over A.I. are centered. The field is haunted by Mary Shelley's 1818 novel Frankenstein and the fear that man's creation will overshadow and destroy man.


As far as I know, as clever as current A.I. has become, it has not yet achieved true sentience. It is not self-directed, able to decide for itself what it wants to do, but rather does what it is designed to do. I really do not think that we are close to a Skynet or Matrix scenario where the machines decide for themselves to take over. We do not know where sentience comes from or how a machine could begin to be really self-aware. It is often assumed that self-awareness is an inevitable byproduct of complexity: that the reason humans are self-aware is that our brains are so complex, much more complex than the most advanced computers yet in existence. Once computers become complex enough, at a point often described as the Singularity, they will inevitably become self-aware and may well decide to take over. It is a common sci-fi trope.


This depends in part on the assumption that humans are merely extremely complex biomechanical machines, differing from our "smart" machines only in being organically rather than mechanically based and in the level of complexity involved. This is an anthropology which Christianity would dispute.


That does not mean that there are no dangers associated with the current developments in A.I., dangers which the article Pr. Austin posted describes. But the dangers are more familiar than the exotic A.I. threats of the Terminator or Matrix sort. They are the dangers of people doing foolish and dangerous things with invented tools that allow them to create ever-increasing amounts of havoc more easily and efficiently.


Ever since WW II, people have had the means to effectively end all human life on planet earth. We can create, with the push of a few buttons, a human extinction level event. A.I., in the manner of the movie War Games, can make that easier to do, and, more frighteningly, easier to do accidentally, but it is still a human threat to humanity.


As we consider the dangers that recent developments in A.I. pose, the threat of the machines becoming self-aware, self-willed, and going rogue is something of a red herring. The real problem is more commonplace but really no less frightening. The real danger is the havoc people will let loose, on purpose or even accidentally, with tools that they do not completely understand but which can inflict damage ever more easily and with greater effect.


Joseph Stalin, Mao Zedong, and Adolf Hitler did not invent the police state or the idea of controlling people by manipulating the flow of information and propaganda, but they used the latest developments in information technology to do so more effectively. It is troubling to think what people today could do along those lines with the information technology now being developed.



Pr. Daniel Fienen
LCMS

Rev. Edward Engelbrecht

#56
I'll interact with the Times article, which is cautionary but ultimately encourages use of A.I.

700 Academics. What the academics fear is irresponsible use of the technology. Remember, the A stands for "artificial," not "autonomous." Human decisions manage the technology.
Checks and Balances. Agreed.
New Language Ability. Summary of the latest.
A.I. Hacking Language. It will do what it is programmed to do.
Questions. Sure, and more questions will arise.
A.I. Eating Up Culture. Checks and balances needed.
A.I. Influencing Culture (x3). Checks and balances.
Social Media (x2). I disagree here. Some were harmed terribly by social media, yet millions use it daily (e.g., this platform). Again, checks and balances are needed. Children should stay off social media until ready.
Large Language. Checks and balances.
Reckon with A.I. Speculation about benefits.
China. New arms/economy race; stay responsible.
Benefits of A.I. Optimism here.
Update Institutions. Change is definitely coming.
I serve as administrator for The Lutheran Study Bible group on Facebook.

James S. Rustad

Quote from: Charles Austin on March 27, 2023, 09:41:14 AM

From the NYT today
Guest essay on AI
By Yuval Harari. Tristan Harris and Aza Raskin
   Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
   In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today's large language models are caught in a race to put all of humanity on that plane.
   Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them. A race to dominate the market should not set the speed of deploying humanity's most consequential technology. We should move at whatever speed enables us to get this right.

The article starts out trying to scare the reader with an argument for the precautionary principle. In its purest form, the precautionary principle would result in humans never doing anything new, because we can't possibly know the end result.

At least this article doesn't go there. Instead, it proceeds to describe why the nightmare will remain a nightmare instead of becoming reality. There are enough qualified people out there asking the questions that need to be answered before we go far enough that the nightmare could become real.

Excellent article, Charles. Thank you for posting it. What's your take on it?

Rev. Edward Engelbrecht

Quote from: Dan Fienen on March 27, 2023, 11:35:51 AM
Back in 1942, Isaac Asimov published a short story, "Runaround," in which he set his Three Laws of Robotics:


Quote
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



At the time, robots were a science-fictional concept, but they have since become an integral part of our world. But they have developed differently than Asimov or other futurists envisioned. Rather than general-purpose machines that could learn to do a wide variety of tasks, they have developed as purpose-built machines that, while adaptable, are not nearly as widely adaptable as Asimov's robots, and with far less personality.


What should, perhaps, concern us is that A.I. is currently being developed without a hint of anything like the Three Laws of Robotics being implemented. As A.I. develops, we need to see about incorporating something like the Three Laws into the mix.


It is on that personality that current concerns over A.I. are centered. The field is haunted by Mary Shelley's 1818 novel Frankenstein and the fear that man's creation will overshadow and destroy man.


As far as I know, as clever as current A.I. has become, it has not yet achieved true sentience. It is not self-directed, able to decide for itself what it wants to do; rather, it does what it is designed to do. I really do not think that we are close to a Skynet or Matrix scenario in which the machines decide for themselves to take over. We do not know where sentience comes from or how a machine could begin to be really self-aware. It is often assumed that self-awareness is an inevitable byproduct of complexity: that the reason humans are self-aware is that our brains are so complex, much more complex than the most advanced computers yet in existence. Once computers become complex enough, a point often described as the Singularity, they will inevitably become self-aware and may well decide to take over. It is a common sci-fi trope.


This depends in part on the assumption that humans are merely extremely complex biomechanical machines, differing from our "smart" machines only in being organically rather than mechanically based and in the level of complexity involved. This is an anthropology which Christianity would dispute.


That does not mean that there are no dangers associated with the current developments in A.I., dangers which the article Pr. Austin posted points out. But the dangers are more familiar than the exotic A.I. threats of the Terminator/Matrix sort. They are the dangers of people doing foolish and destructive things with invented tools that allow them to create ever-increasing amounts of havoc more easily and efficiently.


Ever since WW II people have had the means to effectively end all human life on planet earth. With the push of a few buttons we can create a human extinction-level event. A.I., in the manner of the movie WarGames, can make that easier to do, and, more frighteningly, easier to do accidentally, but it is still a human threat to humanity.


As we consider the dangers that recent developments in A.I. pose, the threat of the machines becoming self-aware, self-willed, and going rogue is something of a red herring. The real problem is more commonplace but really no less frightening. The real danger is the havoc people will let loose, on purpose or even accidentally, with tools that they do not completely understand but which can inflict damage ever more easily and with greater effect.


Joseph Stalin, Mao Zedong, and Adolf Hitler did not invent the police state or the idea of controlling people by manipulating the flow of information and propaganda, but they used the latest developments in information technology to do so more effectively. It is troubling to think what people today could do along those lines with the information technology now being developed.




I appreciate Asimov's laws, but the words "injure" and "harm" need clarification, especially for bots doing surgery.
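The laws themselves form a strict priority ordering, which is easy enough to write down; the hard part is exactly the predicate in question. A toy sketch in Python (the class, the field names, and the harms_human test are purely my own illustration, not any real robotics system):

```python
# Illustrative only: the Three Laws as a strict priority check.
# The ordering is trivial; deciding what counts as "harm" is not.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # First Law: would this injure a person?
    ordered_by_human: bool  # Second Law: was this commanded by a human?
    harms_self: bool        # Third Law: does this endanger the robot?

def permitted(action: Action) -> bool:
    """Evaluate the laws in strict priority order."""
    if action.harms_human:          # First Law overrides everything
        return False
    if action.ordered_by_human:     # Second Law, subordinate to the First
        return True
    return not action.harms_self    # Third Law, subordinate to both

# The surgery problem: a life-saving incision does "injure" the patient,
# so a naive harm predicate forbids it outright.
incision = Action("make incision", harms_human=True,
                  ordered_by_human=True, harms_self=False)
print(permitted(incision))  # prints False
```

The priority logic is three lines; everything contentious is hidden inside that harms_human flag, which is the clarification the surgical case demands.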
I serve as administrator for The Lutheran Study Bible group on Facebook.

pearson

Quote from: Dan Fienen on March 27, 2023, 11:35:51 AM

This depends in part on the assumptions that humans are merely extremely complex biomechanical machines, differing from our "smart" machines in being organically rather than mechanically based and the level of complexity involved. This is an anthropology which Christianity would dispute.


On what grounds?

Tom Pearson
