Code experiments: A Computer Weekly Downtime Upload podcast


Many people will argue the case for building and deploying a new IT system to streamline a business process. Some can calculate the return on investment, but few can say, with any real accuracy, how much better the process runs with the new software than it did before.

Charles Beadnall, chief technology officer (CTO) at GoDaddy, questions how the success of software development is currently judged – specifically, whether a new implementation genuinely improves on what came before it.

Beadnall, who has been at GoDaddy for almost 10 years, believes in applying scientific methods to software development. He says: “You have a hypothesis, you test that hypothesis with a control dataset, then you run the experiment and evaluate the outcomes with an experimental group such that you can compare the outcomes against your hypothesis.”

While this idea of running software projects like an experiment seems straightforward, Beadnall says it’s a “bit more difficult in practice because it represents a giant mindset shift for product management and engineering – it’s not something that comes as naturally to how many teams operate today”.

Often, he says, a software development team will be tasked with writing an application “to make something better”, and the team will go off and try to do it. “We kind of hope it works, and maybe we have anecdotal evidence when somebody says good things about it or it seems like it’s working,” he adds.

But if the team has not collected real data on how long a particular action takes to complete, it is hard to judge whether the project has been a success. This, he says, is “the myth of the infallible product manager”.

For Beadnall, developers not only need to understand what a piece of code is supposed to do, but they should be in a position to prove that it works as expected. “You should know when you finish writing a piece of code that it actually does what it is supposed to do.”


“You should know when you finish writing a piece of code that it actually does what it is supposed to do”

Charles Beadnall, GoDaddy

For instance, if an application is designed to improve a particular business process, Beadnall says developers could measure how long it takes a group of workers to complete the process without the new application. This provides the control dataset, against which the results with the new application can be compared.

“You can run the experiment where half the folks do it the old way and half use the new way,” he says.

Once the experiment has gathered enough data for the measurements to be statistically significant, the programmer can then draw a conclusion as to whether the application has achieved the desired outcome.
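
As a concrete illustration of the kind of experiment Beadnall describes, the sketch below compares task completion times for a hypothetical control group (the old process) and treatment group (the new application). The timing figures are invented purely for illustration, and Welch’s two-sample t-test from SciPy is used here as one common way to check statistical significance – it is not a method Beadnall or GoDaddy is known to prescribe.

```python
# Minimal sketch of the control-vs-treatment experiment described above.
# All timing data below is invented purely for illustration.
from scipy import stats

# Task completion times in minutes: the control group uses the old
# process, the treatment group uses the new application.
control = [42.0, 38.5, 45.2, 40.1, 39.8, 44.3, 41.7, 43.0]
treatment = [35.1, 33.4, 36.8, 31.9, 34.5, 32.7, 36.0, 33.8]

# Welch's two-sample t-test: does the new application change mean
# completion time beyond what chance variation would explain?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

mean_control = sum(control) / len(control)
mean_treatment = sum(treatment) / len(treatment)

print(f"control mean:   {mean_control:.1f} min")
print(f"treatment mean: {mean_treatment:.1f} min")
print(f"p-value:        {p_value:.4f}")

# With a conventional threshold of p < 0.05, a small p-value supports
# the hypothesis that the new application speeds up the process.
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Not enough evidence to conclude the application helped.")
```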

For Beadnall, artificial intelligence (AI) tooling has reached a point where it is easier than ever to start experimenting with large language models like ChatGPT.

“A few years ago, you could handcraft your own neural network. Obviously, tools have streamlined this kind of thing, democratising AI,” he says. “It’s an API [application programming interface] that you can call from almost any program. Obviously, there are some implications of doing that.”
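
To illustrate Beadnall’s point that a large language model is now “an API that you can call from almost any program”, the sketch below uses the official OpenAI Python client. The model name, prompt and use case are placeholders chosen for this example, and any hosted LLM API would follow a similar request-and-response pattern.

```python
# Illustrative sketch only: calling a hosted LLM from ordinary
# application code. Assumes the official OpenAI Python client is
# installed and an OPENAI_API_KEY is set in the environment; the
# model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarise support tickets."},
        {"role": "user", "content": "Summarise: customer cannot renew domain."},
    ],
)

print(response.choices[0].message.content)
```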

Applying scientific experimentation to AI would mean first measuring how well an existing system operates, then measuring the improvement AI actually delivers. Industry adoption of AI appears to be accelerating, but it remains unclear whether IT and business leaders have the inclination and motivation to test their AI deployments using scientific experiments.