Spring 2023

NSPE Now: Outlook
Unlocking the Power of Artificial Intelligence

By Britt Smith, P.E., F.NSPE, President 2022–23

ChatGPT has brought artificial intelligence (AI) to the forefront of the news lately. While NSPE has always been involved in emerging technology, it is clear to me that we must be involved in the growth of AI to ensure safeguards are in place and that ethical standards guide and govern its development and deployment.

A newly released version of ChatGPT (powered by GPT-4) took the bar exam and scored in the 90th percentile, according to recent news reports. Upon learning this, my first thoughts were filled with the many lawyer jokes this could spark but, in deference to our friends in that profession, I will not go down that path. On a more serious note, it did make me wonder what this development potentially means for the profession of engineering and for PEs specifically. So, I took what seemed the most logical first step and asked ChatGPT the following question: Will AI be able to do the design work currently being done by professional engineers?

The answer: "AI has already demonstrated remarkable ability in various fields, including engineering. However, it is important to note that design work currently being created by professional engineers involves a high level of complexity that goes beyond what currently AI is capable of….That being said, there is ongoing research into developing AI systems that can design complete systems… However, it is unlikely that AI will completely replace professional engineers in the near future, as the human element is still crucial in the design process."

I was a bit surprised by the definitive nature of the response, but after doing more research on the subject, the answer seems to be in line with the thinking of those developing the technology.

Artificial intelligence and machine learning can give us powerful tools to augment our work. In my own case, I work for a small city public works department that has benefitted from this technology. Through a partnership with a local university, we recently used AI and machine learning to evaluate the condition of the pavements in our road network. The system identified pavement distress types and severities from video images. Normally, this type of effort would take a person many days or weeks to complete, applying a somewhat subjective standard. These innovative tools allowed us to complete the project in a fraction of the time, with results that were objective and easily repeatable.
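For readers curious about what such a system can look like in practice, here is a minimal sketch of frame-by-frame pavement distress scoring. It is illustrative only: the model file name, severity labels, and sampling rate are assumptions for the example, not details of the project described above.

```python
# Minimal sketch: score pavement distress severity from road video frames.
# Assumptions (not from the article): a PyTorch image classifier fine-tuned on
# labeled pavement photos and saved as "pavement_model.pt"; OpenCV for decoding.
import cv2
import torch
from torchvision import transforms

SEVERITY_LABELS = ["none", "low", "medium", "high"]  # hypothetical classes

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_video(video_path, model_path="pavement_model.pt", sample_every=30):
    """Classify every Nth frame and return (frame_index, severity) pairs."""
    model = torch.load(model_path, map_location="cpu")  # assumed trained model
    model.eval()

    results = []
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV decodes as BGR
            batch = preprocess(rgb).unsqueeze(0)          # add batch dimension
            with torch.no_grad():
                logits = model(batch)
            results.append((frame_index, SEVERITY_LABELS[int(logits.argmax(dim=1))]))
        frame_index += 1
    capture.release()
    return results
```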

This is just one example of how AI can improve our work; thousands more exist in areas such as design optimization, remote inspection, and new product testing. But as with every past technological innovation adopted by our profession, the rollout and use of AI must be done with intentionality and safeguards. By doing so, we ensure its deployment is ethical and responsible. We, as the end users, must know and understand its capabilities and limitations.

What does this mean for the profession of engineering? As stated in our creed, we must hold paramount the protection of the public health, safety, and welfare. So, if this technology is upon us, how do we provide that vital protection? Once again, I asked ChatGPT a question: If AI does advance in engineering design, how will we ensure that ethical standards are maintained in the process?

The answer: "…maintaining ethical standards in AI-driven engineering design will require a combination of guidelines, regulation, oversight, and human expertise to ensure that these systems work in the best interest of society and do not create unintended harm."

In a recent episode of the 60 Minutes news program, Sundar Pichai, the CEO of Google, stated, "There has to be regulation. You're going to need laws…there have to be consequences… Anybody who has worked with AI for a while…realize[s] this is something so different and so deep that we would need societal regulations to think about how to adapt."

Additionally, the Future of Life Institute, a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity (particularly risks from advanced artificial intelligence), published an open letter stating in part, "…Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable…"

I agree with these assessments. As I have read about and researched these issues, it has become abundantly clear that professional engineers must engage in the evolution of AI.

Over the past three years, the NSPE Software Professional Certification Task Force has worked with subject matter experts to find ways in which we can be part of a solution being called for by the computer industry. Our goal is to create a certification process that effectively evaluates competencies and ethical standards for developers. This certification could be one piece of the safeguards called for by Google's CEO and many others working in the development of AI.

Professional engineers must be involved in the development of AI. It is also critical that we ensure AI regulations make protection of the public a top priority.