In today’s episode, my guest is Ted Nielsen. He is the vice president of product management at UKG, which is the combined entity of Ultimate Software and Kronos Group that merged during the pandemic. We are talking about all kinds of things, from classic languages and individual motivations to ethical AI and the future of work.

Ted’s specialty and portfolio are focused on data and artificial intelligence, but his academic background is in classical languages. His passion for classical languages came from his teachers, who were equally passionate about them.

Through Ted’s education in this subject area, he realized that the great works of philosophy, poetry and history written by other cultures have something in common: their authors wanted the same things we want today. “You realize that people 2,000 years ago, 1,500 years ago, depending on the time period that you specialize in, they’re a lot like us today. They care about the same things. They have the same fears. They have the same desires,” Ted says.

Think about it this way: From 2020 to now, the world has seen a pandemic, uncertainty, chaos, the impacts of climate change, and more. We might agree that these past few years have been some of the worst times for society, but Ted begs to differ. He gives that dubious honor to 536 AD. A volcanic eruption led to darkness, starvation, the Plague of Justinian and more over the following years. So while we can argue that 2020 deserves the award for the worst year, 536 AD has it beat.

The Starr Conspiracy

Punk Rock HR is proudly underwritten by The Starr Conspiracy. The Starr Conspiracy is a B2B marketing agency for innovative brands creating the future of workplace solutions. For more information, head over to thestarrconspiracy.com.

Ethical AI

During our conversation, Ted talked about what UKG offers and the direction of the new organization. But one unusual topic he raised was the development and implementation of ethical AI. It’s not a topic you typically hear about concerning payroll and benefits, so I was curious how it all fits together.

From Ted’s explanation, it’s “the sort of complexity of running a business, a business at scale. That boils down to, at the end of the day, our software is helping individual people do things that one person couldn’t possibly do.” Some people use Excel to handle their payroll, and while that may be manageable with a small team, it becomes far more complicated as more people and business functions are added, which invites human error.
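Later in the interview, Ted jokes that he once assumed payroll was just “a rate times the number of hours minus some taxes and deductions.” As a toy sketch of why that mental model breaks down (every rate, threshold and rule below is invented for illustration and is not drawn from UKG), here is the naive formula next to a single real-world wrinkle, overtime, which is already enough to make the two disagree:

```python
# Toy illustration only: all rates, thresholds, and rules here are invented.
# Real payroll adds jurisdiction-specific taxes, benefit deductions, union
# rules, retroactive corrections, and much more.

def naive_net_pay(rate, hours, tax_rate=0.25):
    """The 'rate times hours minus taxes' mental model."""
    gross = rate * hours
    return gross - gross * tax_rate

def net_pay_with_overtime(rate, hours, tax_rate=0.25, ot_threshold=40, ot_mult=1.5):
    """One real-world wrinkle: hours past the threshold pay time-and-a-half."""
    regular = min(hours, ot_threshold)
    overtime = max(hours - ot_threshold, 0)
    gross = rate * regular + rate * ot_mult * overtime
    return gross - gross * tax_rate

# For a 45-hour week at $20/hour, the two models already disagree:
print(naive_net_pay(20, 45))          # 675.0
print(net_pay_with_overtime(20, 45))  # 712.5
```

Layer on tax brackets, benefit elections and union contracts, and the spreadsheet approach stops scaling, which is the gap Ted says software like UKG’s exists to close.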

UKG provides a way for its customers to scale while removing the human error that arises from manual processes. Ted and his team view AI less as a nanny pointing out your errors and more as a sidekick there to help you figure it all out. “It’s not the person saying like you did this wrong, you did this wrong, you did this wrong, but a person to help provide more universal context for what’s happening, to help you do your job better, to point out potential errors, potential gotchas that are coming — to make smart recommendations so that people can make smart choices,” shares Ted.

Robots aren’t taking over our world, but technology, and specifically AI, has given companies the ability to streamline numerous manual processes. Ted explains that the technology is here “to understand that people, with all of their dynamism and with all the incredible flexibility that comes with our approach to problem-solving, aren’t really good at processing really linear data at scale.”

Developing and Implementing Ethical AI

Another way UKG is developing and using ethical AI services is through a workforce listening product called Employee Voice. This product allows companies to distribute surveys at scale. In one way or another, most people have taken annual surveys built around linear data, where employees are asked many boring questions to rate a service or product.

Not many people like sitting there and answering all those questions, and Ted says Employee Voice flips that method on its head. “Instead of all of these sort of more linear measures, it presents an open text box and says, ‘How are you feeling? What are you doing? What are you worried about? What are you focused on right now?’”

In theory, a person could sit down and read through thousands of responses, but that is enormously time-consuming, which is why companies usually fall back on one-to-five rating questions that yield dimensional data they can efficiently slice and dice. Employee Voice offers a third option. “What Employee Voice does is, it takes that open-ended text information and runs natural language processing against it so that people can understand what’s actually being said, how people feel about what’s being said and build action plans and other sorts of ways to respond,” explains Ted.
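For readers curious what “running natural language processing against open-ended text” can look like at its very simplest, here is a deliberately tiny sketch. It is not UKG’s implementation, which isn’t published here; the word lists and scoring are invented, and a production system would use trained models rather than lexicons:

```python
# Illustrative sketch only: a toy lexicon-based pass over open-ended survey
# responses, turning free text into the kind of dimensional data the article
# describes (a sentiment label per response, plus a rough theme tally).
from collections import Counter

# Hypothetical mini lexicons; a real system would use a trained model.
POSITIVE = {"love", "great", "happy", "supported", "excited"}
NEGATIVE = {"worried", "stressed", "overwhelmed", "frustrated", "burnout"}

def analyze(responses):
    """Label each free-text response and tally longer words as crude themes."""
    results = []
    themes = Counter()
    for text in responses:
        words = [w.strip(".,!?").lower() for w in text.split()]
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        results.append({"text": text, "sentiment": label})
        themes.update(w for w in words if len(w) > 6)  # crude theme proxy
    return results, themes

responses = [
    "I love my team but I am worried about burnout.",
    "Open enrollment was stressed and frustrating this year.",
    "Feeling supported and excited about the merger.",
]
scored, themes = analyze(responses)
for r in scored:
    print(r["sentiment"])  # prints "negative", "negative", "positive" in turn
```

Even this crude version shows the payoff Ted describes: thousands of open-ended answers collapse into labels and themes a person can actually act on, without anyone reading every response.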

Possible Ramifications

While this technology is admittedly astonishing, I’d rather avoid an apocalyptic doomsday brought on by the rise of AI. Is there a way for users to give themselves checks and balances and thus avoid making decisions that can negatively affect us down the road?

I was hopeful that there would be a way, but Ted shares that it might not be possible. “I think the answer is actually kind of, ‘no.’ I think that the story of humanity is one of unintended consequences,” he says. What’s needed is to “talk about things from a long-term strategic perspective.”

Through automation and AI, leaders can free up their time to address ethical questions by acting like submarines. “We need to be able to go deep and be focused on the current state and solve our problems, but also able to periscope up and understand where we’re headed,” Ted explains. By removing the need to manually manage processes like payroll and open enrollment, “people can be people, and they can use this sort of critical thinking and long-term strategic thinking to help periscope up.”


People in This Episode

Full Transcript

Laurie Ruettimann:

This episode of Punk Rock HR is sponsored by The Starr Conspiracy. The Starr Conspiracy is the B2B marketing agency for innovative brands creating the future of workplace solutions. For more information, head on over to thestarrconspiracy.com.

Hey everybody, I’m Laurie Ruettimann. Welcome back to Punk Rock HR. My guest today is Ted Nielsen. He’s the VP of product at a company called UKG, which is the combined entity of Ultimate Software and Kronos Group. They merged during COVID, and they’re UKG now. And Ted is on the show today to talk about, well, all kinds of stuff. Classical languages like Greek and Latin, and the history of mankind, and individual motivation. But we also talk about ethical AI and responsible programming and the future of work. This conversation is just the favorite kind of conversation I have on Punk Rock HR, which is unscripted, no agenda, just talking to someone really smart about the world of work. So if you’re up for that kind of ride, well, sit back and enjoy this conversation with Ted Nielsen. Hey Ted, welcome to the podcast.

Ted Nielsen:

Thank you so much for having me. It’s such a pleasure to be here.

Laurie Ruettimann:

Well, I’m excited to have this conversation with you, but before we get started, I would love for you to tell everybody who you are and what you’re all about.

Ted Nielsen:

No, absolutely. So my name is Ted Nielsen. I’m the vice president of product management at UKG, which is an organization focused on building the world’s finest human capital management and workforce management software. My specialty and my portfolio is focused in the data and artificial intelligence space.

Laurie Ruettimann:

Well, that’s a huge, obscure job. And before we got started a few minutes ago, you were telling me that your family thinks you do printer Wi-Fi and tech support. So is that your background?

Ted Nielsen:

It is not. Absolutely not. Academically speaking, my background is in classical languages. So I have a degree in Latin from the University of Chicago in our classical languages and literatures department. And it was a little bit of a winding road to end up in software, much less B2B SaaS software. But no, I was always the nerd at home. I was always the one trying to get the Wi-Fi to work. Growing up on a farm, as it was, trying to see if you could stretch the signal out to the barn or not. So I did a lot of that, but I never did any tech support professionally.

Laurie Ruettimann:

Well, let’s talk a little bit about your academic background because classical languages is something you see in the Dead Poets Society and on TV, but you don’t really meet people with that kind of background. So what did you love about language enough to pursue it in a university setting?

Ted Nielsen:

No, absolutely. So some of it is absolutely the education I had, the teachers that I had that were super passionate about it, and they shared that passion with me. At a certain point, really in any language, this is not unique to Latin or to ancient Greek, but in any language, you usually start with a textbook written by a person whose native language is your own. So you start reading Latin written by a native English speaker. And eventually as you’ve worked through those textbooks, you leave those behind and you start reading Latin written by a Roman, and those become your textbooks. And those become the things that you study and that you look at.

But what you find is all those books that you read, the great works of philosophy and poetry and the histories, and all the things that were written by that culture, you realize that people 2,000 years ago, 1,500 years ago, depending on the time period that you specialize in, they’re a lot like us today. They care about the same things. They have the same fears. They have the same desires. And in a lot of ways, that granite wall of time, that inability to sort of look back to the past or sort of reach the past, falls away. And you really find this sort of common human story in who people are trying to become, the problems that they face every day. And yeah, I found that really beautiful and really motivating.

Laurie Ruettimann:

Well, I wonder if you can talk a little bit more about that because I think so many of us walk around thinking that this moment in time of the pandemic, of chaos, of climate change is so particularly unique. And I’m comforted to hear you talk about the parallels between 2,000 years ago and today. So what do you find are some of the common themes?

Ted Nielsen:

Gosh, I don’t want to be wrong on this. I’m pretty sure it was [536] AD that’s broadly considered the worst year to be alive in human history.

Laurie Ruettimann:

Wait, followed by 2022. Come on now.

Ted Nielsen:

2020 part one and part two. Yeah. So [this] is during the Plague of Justinian. So already you have a plague. So the Roman Empire is falling apart. You have a society under pressure, lots of different political regimes happening. The current thesis is that there was a volcanic eruption that actually kind of blanketed Europe in fog for the better part of a year. And so it killed the crops and then as crops died, there was starvation. Usually before the advent of modern medicine, starvation generally came right before plague, and then plague happened. So I hate to break it to us, but as horrible as 2020 parts one and two have been, there have been other years that very much dealt with that.

Laurie Ruettimann:

That’s both comforting and scary because the human —

Ted Nielsen:

And horrifying.

Laurie Ruettimann:

Yeah. The human tendency is to ignore what’s right in front of us. And to focus on things that are outside of our control, whether it’s religion or cult-like beliefs, or even over-indexing on somebody else solving our problems like politicians or figures. We just tend to be in denial. So you’re in a really interesting situation where you’re focused on people working today and trying to solve problems. So how do you take that knowledge from that ancient Greek or that Roman time period and apply it to what you’re doing today?

Ted Nielsen:

So I will tell you every person on my team kind of has a running joke about sort of the strange stories or the little anecdotes that I will feather in. So that’ll always be — or the analogies and metaphors that I’ll bring to bear. I like to think about, and especially in the human capital management space, I think it’s very easy to get caught around the axle of the idea that there are hourly workers and then there are salary workers. There are people who are blue-collar and there are people who are white-collar, and they have radically different needs. They have radically different backgrounds. And the ways that you would address different problems or different challenges that each group has, are super-different. Almost like peas and carrots on your plate, like you make sure they don’t touch.

What I think is so important is at the end of the day, what we’re seeing is they’re all people, they actually have very similar drivers. To approach the idea that a person working in an hourly environment doesn’t want to grow or doesn’t want to develop, or the idea that they don’t have career aspirations — I think we’re moving away from that as an organization and understanding that the operational capabilities that you provide to an organization to help run their company are different from the tools that you give them, that you arm them with, to grow and develop the people that make up that company. And to think about that differently and realize that, under the hood, everybody has similar motivations. They all want to grow. They all want to develop. And what are the ways that we can help organizations run efficiently while also making the people within those organizations grow efficiently?

Laurie Ruettimann:

Well, I wonder as you were talking about everybody having the same primary drivers. The way we define that in the modern HR era is through Maslow and his hierarchy of needs. Do you believe in that or is there a different model that you follow?

Ted Nielsen:

I don’t, personally, I think it’s very attractive to find patterns or theses for how people think or how people behave. We’re pattern-driven people. We want to easily categorize people. And unfortunately, when you categorize people, you tend to pigeonhole them. Once a person is in a category, it’s really hard to jailbreak and sort of get out of it. I’m not a huge fan of that purely linear model of thinking. And frankly, it’s very similar to how people think about history — that there’s this unbroken line of progress from deep time, from the great pyramids to the Pizza Hut and then from the Pizza Hut to the iPhone and iPhone to Bitcoin. Like this sort of unending, up-and-to-the-right line of life just getting better for people. And we know that’s not true. We know there are setbacks, there are loops, there are ups and downs, different areas advance faster than others.

And so, time is much more cyclical and people are more cyclical. And so an individual person in a career may make a trade-off: “I no longer want to be in this particular leadership role. I want to care more about my individual contribution” or “I want to emphasize my family life more.” And so take a step back or take a different job. Or I may be particularly interested in growing and developing my financial security. So I may go to a high-risk job that has a high-reward potential with a commission or other sorts of things. People change and people’s needs and drivers change over time. I’m not wild about very linear or highly structured models for human behavior, because I think we’re not that simple.

Laurie Ruettimann:

Well, I like that you used moving from the iPhone to Bitcoin as an example because I think Bitcoin might be a blip. We all recognize that currency is just a human creation. And we’re all over-indexing on this idea of whatever that version of Bitcoin is being the new currency. But one of the things that strikes me is we never get pay right. Whether it’s dollars, euros, rocks, Bitcoin, we just don’t nail compensation. Do you have any thoughts on that? I mean, whoa, what a mess.

Ted Nielsen:

Absolutely. So it’s funny, pay is one of those things that from an accuracy perspective is critical — to the penny accuracy for billions and hundreds of billions of dollars. It’s silly to say, but it’s kind of table stakes. And it’s so complicated as, obviously, legislative realities are changing, individual preferences — everything from the deductions that people take and the bonuses that are available to them, different contracts that might be in place from a union. Pay is a space that complexifies very, very quickly. I used to think sort of jokingly, as I was sort of entering the human capital management space, how hard could it really be? It’s a rate times the number of hours minus some taxes and deductions, bada bing, bada boom, it’s a calculator. Like how hard is this? I was wrong. I was very, very wrong on that. But what I will say, we get obsessed and I think focus on — pay is a great example of this.

We are focused on the operational needs of an organization. The way that we can help them run. And I will tell you absolutely being right about payroll and being right about your timesheets and the right benefit deductions being pulled out of your paycheck. All of those are extremely important, but that’s just keeping the train moving. That’s just running a business. And I think all of us, especially in the last couple years, living through this sort of dramatic change, realize that we cannot just run our business. We also have to prepare our business for what’s coming next. We have to think about how our people contribute and sort of put them in new situations that grow and stretch them.

The organization that I’m a part of, UKG, is made up of two legacy organizations that went through a merger right in the middle of 2020. I will tell you doing a merger over Zoom, I do not recommend it for anybody’s mental health, but we pulled it off. Two 6,000-person companies with tens of thousands of customers, millions of lives under management, went through an incredible merger, Ultimate Software and Kronos Incorporated. And everybody’s job at the company got twice as hard overnight. And there’s no way that you can prepare for that. There’s no gym that you can go to. There’s no training program. There’s no learning module that you can distribute because your job — even in place, just running in place — your job got twice as hard. And it didn’t happen linearly. It happened overnight for everybody.

And so I think about those operational needs that exist for an organization. Doing those well and hitting them out of the park so that frankly, people don’t have to spend time managing them. The more that you can automate and just get those things to work, you can focus on frankly, the really important parts of your job, which are all the other things that are coming your way.

Laurie Ruettimann:

Well that’s a really beautiful segue to what I’m so excited to talk to you about today. When we met, you were talking about some of the offerings of UKG, the new organization and where the company’s going, but then you started talking about something that really caught my attention, which is the use, the implementation, the development of ethical AI. And I thought, whoa, you don’t hear this often at a conference about payroll and benefits and all the other stuff that we were just talking about, operations. So tell me, where does ethical AI fit into all this, but more importantly, what is it?

Ted Nielsen:

So when you think about, as I just said, like the sort of complexity of running a business, a business at scale. That boils down to, at the end of the day, our software is helping individual people do things that one person couldn’t possibly do. Theoretically, given an infinite number of human beings in an infinite amount of time, you could, by hand, calculate every single paycheck in your organization.

Laurie Ruettimann:

Good Lord, don’t go there because I’ve seen it. I’ve seen it. It’s terrible, 1995.

Ted Nielsen:

There are people right now who are running payroll out of Excel. So at the end of the day, we provide scale and surety and sort of take that human error that comes from these manual processes out of the equation. Now, unfortunately, that equation gets harder and harder and harder as you have more and more people, more and more business functions to care about. And so artificial intelligence is, I think, there’s a lot of different ways to implement it. There’s a lot of different ways to think about it from a product management perspective. But the way that my team and I are focused on implementing it and bringing it to life for UKG customers is, it’s a sidekick. It’s not a nanny. It’s not the person saying like you did this wrong, you did this wrong, you did this wrong, but a person to help provide more universal context for what’s happening, to help you do your job better, to point out potential errors, potential gotchas that are coming, to make smart recommendations so that people can make smart choices.

Because at the end of the day, our goal here isn’t to get rid of people and have robots rule the earth. It’s to understand that people, with all of their dynamism and with all the incredible flexibility that comes with our approach to problem solving, aren’t really good at processing really linear data at scale. One of the offerings that we have at UKG is a workforce listening product called Employee Voice. And so this allows you to distribute at-scale surveys. And I think most people in some way or another have done annual surveys, and they’re generally built around linear quantitative data. One to five, how much do you hate taking a 230-question survey? One to five, what page do you think you’re on? Sort of the very, very question after question after question. Employee Voice flips that on its head.

And instead of all of these sort of more linear measures, it presents an open text box and says like, “How are you feeling? What are you doing? What are you worried about? What are you focused on right now?” One of any number of questions. And it accepts that sort of block text. Now, normally a person could sit down and read 5,000 responses to that or 10,000 responses or 50,000 responses. That’s hard. And so what people do instead is they lean into these one-to-five question models so that they have dimensional data that they can slice and dice. What Employee Voice does is it takes that open-ended text information and runs natural language processing against it so that people can understand what’s actually being said, how people feel about what’s being said and build action plans and other sorts of ways to respond.

Laurie Ruettimann:

One of the interesting things about software that’s being developed today, solutions that are being developed today, is that oftentimes the person who’s using it is both the user and part of the product. And the software works against us, not for us, or it works in ways that aren’t necessarily transparent, which I think is the antithesis of what we want: a future state of just life. So I wonder where the ethics of all this comes in, because I want to feel comfortable that whatever I’m writing is going to be used for the benefit of me, the benefit of my development, the benefit of the organization and not programmatically down the line to doom society, or even doom my own career. So where does the ethics of all this come into play?

Ted Nielsen:

Absolutely. And so part of it is both how the data is used and how we evaluate the algorithms that we’re building because I think both are super-important. So in that example of Employee Voice, it’s actually only used in aggregate. So that we know that people will feel comfortable about what they’re saying and be more honest. They do that in an environment where they know no matter how hard a person tries, they’re not going to figure out that Ted is the person complaining about this problem or voicing a concern about this problem.

Anonymity is the best way to ensure that people feel comfortable sharing their opinion. And so that’s an example of how you use the data. Another example is in something like candidate match, which is a capability built into recruiting, which is our applicant tracking system software. And this allows us to compare a job description and kind of all the rich skills and histories and expectations for a given role with the resume that’s been submitted as part of a candidate’s application. As I said, everything from their skills, their history, the education level that a person has. What it helps with is, it doesn’t look at something like a person’s ZIP code, which can tell you a lot about a person’s race and socioeconomic status. It doesn’t look at a person’s name which can tell you about their gender and potentially also their race.

And so using things like candidate match, it actually helps organizations interrupt bias in the recruiting process by evaluating purely on the basis of skills, background and competencies. And what’s even more important is, not only are we helping do that as part of the transaction, but after the fact, we are evaluating the quality of the model ourselves, by looking at the results of people who actually do in fact go through the recruiting process, who do in fact get hired, to make sure that up to the EEO-1 standards, we’re not adversely affecting any one of those protected groups. So it’s not just how the data is used, but it’s how we look at and measure ourselves for how the data is actually creating change in an organization.
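Ted doesn’t specify the statistical test UKG applies here, but a common first-pass check for the kind of adverse impact he describes is the EEOC’s “four-fifths rule”: no group’s selection rate should fall below 80% of the highest group’s rate. A minimal sketch, with invented numbers:

```python
# Hedged sketch: not UKG's actual evaluation method. This implements the
# EEOC "four-fifths rule," a common first check for adverse impact: each
# group's selection rate should be at least 80% of the best group's rate.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {g: selected / applicants for g, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """True per group if its rate clears 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Invented numbers, purely for illustration.
outcomes = {
    "group_a": (30, 100),  # 30% selection rate
    "group_b": (20, 100),  # 20% rate; 20/30 is about 0.67, below 0.8
}
print(four_fifths_check(outcomes))  # {'group_a': True, 'group_b': False}
```

Running a check like this against actual hiring outcomes, after the fact, is the “measuring ourselves” step Ted describes: the model is evaluated not just on its inputs but on the change it produces in the organization.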

Laurie Ruettimann:

Well, it’s so interesting because when I think about ethical AI, I think about the algorithm and the platforms being designed in a specific way and also organizations and individuals making a choice on what they’re looking for and how they interpret data. Is it possible for organizations to flash-forward into the future and build technology in such a way so that we’re purposely not re-engineering data to do negative things with it? Because I worry that a choice around an algorithm, around capturing data is a moment in time. And I want to make sure that we’re setting ourselves up for success, for subsequent ethical decisions down the road. And I’m just going to ask you, does that make sense, what I just asked you?

Ted Nielsen:

So, let me rephrase it to see if I caught it. So I think it’s a classic problem of the kind of the frog in the pot. Every choice that we make leads us down a certain path, or it brings us to a certain point in time, and we don’t always realize as it’s happening kind of where we are until we have an opportunity to take a step back and be like, wow, this water’s really hot. Or how the heck did we end up all the way over here? We wanted to be over there.

Laurie Ruettimann:

Right. Oh my God, it’s Westworld.

Ted Nielsen:

Well, there’s that, too.

Laurie Ruettimann:

We got a bomb. Yeah. It’s terrible. Yeah. So I want to avoid, like, an apocalyptic doomsday scenario and I worry about some of the design of this technology. Is there a way to design it in the moment where we’re giving ourselves checks and balances so that we’re not making decisions that affect us down the road in a terrible way?

Ted Nielsen:

Sure. And I think, and I’m actually going to lean into my background in history and Latin there. And I think the answer is actually kind of, “no.” I think that the story of humanity is one of unintended consequences. And so what I think it does take, and frankly, it’s part of the skill of doing product management really, really well, is at the same time that we always talk about things from a long-term strategic perspective. You’re always talking about the forest and kind of what’s happening at that macro scale. But at the end of the day, we are transacting tree by tree. We’re thinking about this particular feature or this particular context. And so in a certain sense, people in a leadership position, both in the entire generation of software vendors that we have today, as well as the organizations that are implementing them, they need to be a little bit like submarines.

We need to be able to go deep and be focused on the current state and solve our problems but also able to periscope up and understand where we’re headed. And I think this gets back to the point that I was making about AI helping organizations. Because the more that we handle the transactional and the operational on behalf of people and make it easier to close payroll, easier to do open enrollment, all those hair-pulling things that happen at year-end. The more that we take that heavy effort away from them, people can be people, and they can use this sort of critical thinking and long-term strategic thinking to help periscope up.

Laurie Ruettimann:

Well, you’re a student of history clearly and you know, you’re right. I mean the incentive structure out there is for people to make short-term individually favorable decisions, pretend like they’re thinking strategically, but then set us up in history for a lot of failure. So I’m really glad we have you doing what you’re doing and your team to keep an eye out on trends and issues. And to really help us with humanity. Ted, I just have this strong feeling that work technology is so important for the future of humanity. This is how we identify, this is how we contribute. This is how we make improvements to the world. And it’s all being done through these very important systems that are being created right now, right before our eyes. And if we don’t do it with thought and intellect and care and compassion, I worry that we’re not setting up future generations for success.

Ted Nielsen:

Absolutely. Especially as all of those parameters are literally changing day by day. Looking at a society where there might be something like a universal basic income in place, where 20% of today’s jobs might be automated in some fashion or another in the next 10 years. As I said, at the very beginning, this is not a blue-collar-versus-white-collar thing. Two of the professions that are probably the most under pressure in my opinion, from artificial intelligence, aren’t jobs like those in the fast food/hospitality industry or in warehousing, it’s doctors and lawyers. It’s document analysis and image analysis for MRIs and assisted diagnosis and automated contract development and the things that, frankly, are already available. They haven’t been productionalized, they’re not at scale yet, but I think we’re at this very interesting junction.

Laurie Ruettimann:

Well, and it’s also human resources business partners. The bread and butter of HR. Individuals who would hear from one side and another and make a decision. All of that can be mediated through software, through technology. Things that I did as a human resources leader 15 years ago are now done through the UKG platform. So you listen to commercials all the time. You can get an HR manager for $9.99 a month. And even if it’s $999 a month, it’s still cheaper than an HR manager’s salary. So I love that you’re thinking about UBI and automation, what else are you thinking about? What’s on your radar for the future of work?

Ted Nielsen:

I think with the future of work, by definition, it’s so easy to think at a massive scale. To think at the scale of the entire workforce or the demographies of countries or regions. What I think is so important not to forget is, the future of work is extremely intimate. It’s personal, it’s every single human being and their contribution and the way that they help sort of fulfill themselves and make organizations better. And so understanding how we can provide, again, those kind of macro tools that support big-scale problems without losing sight of the fact that there’s a person in there, the human in human resources, the people that you’re there to support. I think one of the things that I’m particularly interested in is, most organizations, just like people, have unique problems. The industry that you’re in, the region that you’re in, the competitors that you have because of your scale, et cetera. Everybody’s like a thumbprint. Everyone is a little bit unique.

Look at vendors, particularly in the cloud space — excuse me, no, look at folks like Amazon, the Google Cloud platform and Azure for Microsoft. So many of them are investing in low-code or low-threshold machine learning to democratize data science, to enable citizen data scientists by lowering the threshold for people who don’t necessarily have a background or expertise in data science to build and develop some insights. I was actually talking with my team about why we think this is happening, and I think it’s a reflection on business applications. This entire generation of business applications, so far, doesn’t in fact provide the insights that these companies need. And so they’re going out to try and figure out how to get the insights to make smart decisions themselves. In a study done by Accenture during the pandemic, only 17% of executives stated that they felt they had enough information to make smart decisions. Fewer than one in five.

And so you have all of these organizations — and there are nowhere near enough data scientists to go around, let me tell you. You have all these organizations who are trying to self-serve, who are trying to survive in a world that is changing day by day. And they’re out there using these low-code tools to try and build and get answers to these critical, critical questions. Because those questions aren’t generalizable, they’re not fully universal. They have to do with a corporation’s size and industry and who they’re competing with and what’s going on and how healthy they are internally and how well they’re retaining their employees. And so, as I said, when it comes to the future of work and the way that it is changing, I unfortunately don’t have a great crystal ball. But what I can say is, as those large-scale, those macro changes are happening, we have to remain focused on the fact that those changes are happening to individual people.

Laurie Ruettimann:

Well, spoken like a man who understands history. I love that, focusing on the individual. Well, Ted, it’s been a real joy to kind of go through this journey of me not really knowing what I’m talking about with programming and historical languages and ethical AI, but I feel a little bit smarter and maybe a little bit more reassured and also scared about the future at the same time. So tell everybody, if you’re a data scientist or a human resources professional and they want to get in touch with you, where do they go? How do they find you?

Ted Nielsen:

The best place to find me is on Twitter. I’m at Ted_Nielsen, N-I-E-L-S-E-N. I’ll sometimes post funny things — jokes about classical languages and stuff from data science and AI. It’s a real grab bag, but that’s the best way to get in touch with me.

Laurie Ruettimann:

I love it. Amazing, my friend. Well, thanks again for being a guest today.

Ted Nielsen:

Absolutely. Thank you for having me.

Laurie Ruettimann:

Hey, everybody. I hope you enjoyed this episode of Punk Rock HR. We are proudly underwritten by The Starr Conspiracy. The Starr Conspiracy is the B2B marketing agency for innovative brands creating the future of workplace solutions. For more information, head on over to thestarrconspiracy.com. Punk Rock HR is produced and edited by Rep Cap with special help from Michael Thibodeaux and Devon McGrath. For more information, show notes, links, and resources, head on over to punkrockhr.com. Now, that’s all for today, and I hope you enjoyed it. We’ll see you next time on Punk Rock HR.
