Episode 355: Your Team's AI Enablement Guide for 2026

The request is always the same: which AI tool should we use?

It's the wrong question. The right question is how you move a team from scattered experiments to fluency. From personal tinkering on weekends to workflows that actually reduce friction during the workday.

Eku Malcolm knows the difference because he's built it. Before joining commonsku as Director of AI Operations, he spent four years at OfficeSpace Software turning AI from buzzword into operating principle. The company scaled from $18 million to $42 million ARR while cutting manual work by 30%. Not through tool selection. Through enablement.

His first week at commonsku? He built a custom GPT in roughly 15 minutes that teaches people how to write better prompts. Team members describe their problem. The tool generates an optimized prompt they can use immediately. No expertise required. Just better output for everyone.
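
If you're curious what sits under the hood of a tool like that, here's a minimal sketch of the same idea built outside the ChatGPT interface with the OpenAI Python SDK. The instruction text, model name, and helper function are assumptions for illustration, not Eku's actual custom GPT.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0).
# The instruction text, model name, and function are illustrative, not Eku's actual custom GPT.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ARCHITECT_INSTRUCTIONS = """You are a best-practice prompt architect.
Given a plain-language description of a problem, return one ready-to-use prompt that:
- assigns the model a clear role,
- states the specific use case and desired output format,
- supplies relevant context and constraints,
- asks the model to flag anything it would need to know but doesn't (to limit hallucination).
Return only the finished prompt."""

def build_prompt(problem_description: str) -> str:
    """Turn a team member's plain-language problem into an optimized prompt they can reuse."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": ARCHITECT_INSTRUCTIONS},
            {"role": "user", "content": problem_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(build_prompt(
        "I need a friendly follow-up email to a client who went quiet after receiving a quote."
    ))
```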

That's what 2026 demands. Less tinkering. More systems.

We dive into:


The Real Question Leaders Are Asking

Stop asking which AI tool you should use. The real question is simpler: how do we start without creating chaos?

Eku's first move at commonsku was running a survey. The results showed that 94% of the team already uses ChatGPT. The barrier isn't adoption or awareness. It's integration. Teams aren't confused about whether AI matters; they're overwhelmed by how it fits into workflows that already work.

The shift from experimentation to fluency requires guardrails. Not the kind that kill innovation, but frameworks that build confidence so people actually use the tools. Leaders want to introduce AI without triggering fear, shiny object syndrome, or the paralysis that comes from 47 different options.

The mandate is straightforward: help teams make fewer decisions while accomplishing more work.


Generative AI Is This Generation's Calculator

Remember when teachers insisted you wouldn't always have a calculator in your pocket? That warning feels quaint now that everyone carries a supercomputer everywhere they go.

Generative AI sits at that same inflection point. What seemed impossible is suddenly at your fingertips, and the only way forward is experimentation. Use it. Try it. Break things if you need to.

Eku breaks down the critical distinction between generative and agentic AI. Generative AI creates content. Drafts, analysis, ideas. It requires human triggers at the start and human review at the end. Agentic AI runs autonomously in the background, executing tasks without constant human intervention.

Understanding this difference shapes your entire strategy. Fifty personalized emails you need drafted? That's generative AI with a human trigger. A system that automatically sends follow-up emails based on client behavior? That's agentic AI.
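
Here's that contrast as a minimal sketch. The event stream, field names, and email stub are all hypothetical; the point is only where the trigger lives.

```python
# An illustrative contrast, not a production pattern: the event stream, field names,
# and send_email stub are hypothetical stand-ins.
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_email(client_name: str, context: str) -> str:
    """Generative: a human triggers this and reviews the draft before anything is sent."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Draft a short, personalized follow-up email to {client_name}. Context: {context}",
        }],
    )
    return response.choices[0].message.content

def send_email(to: str, body: str) -> None:
    """Stub for whatever email system you use; shown only so the sketch is self-contained."""
    print(f"To: {to}\n{body}\n")

def follow_up_agent(events) -> None:
    """Agentic (sketch): a background loop that acts on client-behavior events without a per-item human trigger."""
    for event in events:  # e.g. a stream of CRM events; the shape is hypothetical
        if event["type"] == "quote_viewed_no_reply":
            draft = draft_email(event["client_name"], "Viewed the quote three days ago, no reply yet.")
            send_email(event["client_email"], draft)  # in practice, review, logging, and rate limits belong here
```

The generative path keeps a person in the loop by default; the agentic path only has a review step if you design one in.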

Both have their place. The mistake is treating them the same way.


The Platform Decision: Start With One, Then Expand

Teams are split. Some are Google houses running Gemini. Others are Microsoft houses with Copilot. Meanwhile, ChatGPT, Claude, and a dozen other tools compete for attention daily.

Eku's advice: start with one model and let it build for a while.

Most teams already use ChatGPT in their personal lives. That familiarity matters. It'll get you 80% of the way there for most tasks. Whether it's the absolute best tool for every specific use case doesn't matter initially—what matters is adoption and fluency.

Start with one platform. Build enablement around it. Identify where it falls short. Then strategically add specialized tools for specific use cases—Claude for coding, specific models for financial analysis, whatever your outliers require.

The goal isn't perfection. It's progress with consistency.


Guardrails That Protect Without Paralyzing

Customer data is the obvious line. Don't put customer information into AI tools where you can't control whether that data is used to train the underlying models.

The solution: paid licenses. Freemium tools sacrifice something, usually your data. When you pay for Gemini Pro, ChatGPT Plus, or Claude Pro, your workspace data stays private. It isn't used to train public models.

Eku's rule of thumb: if you'd be comfortable seeing it on the front page of a newspaper, it's probably fine for a free, general-purpose AI tool. If not, lock it down in paid, private environments.
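
If you want something concrete to hang the newspaper test on, here's a minimal sketch of a pre-send screen, with illustrative patterns and a made-up account-ID format. It's a teaching aid, not a substitute for the paid, private workspace.

```python
# A minimal sketch of a pre-send screen, assuming regex checks are enough for illustration.
# Real protection comes from paid, private workspaces and your platform's data controls,
# not from a pattern list; the account-ID format below is hypothetical.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # email addresses
    re.compile(r"\b\d[\d\s().-]{8,}\d\b"),     # phone-number-like digit runs
    re.compile(r"\bACCT-\d{6}\b"),             # hypothetical internal account-ID format
]

def passes_newspaper_test(text: str) -> bool:
    """True if the text contains none of the obvious customer identifiers above."""
    return not any(pattern.search(text) for pattern in SENSITIVE_PATTERNS)

prompt = "Summarize our standard proposal process for new distributors."
if passes_newspaper_test(prompt):
    print("Fine for a general-purpose tool.")
else:
    print("Keep it inside a paid, private workspace.")
```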

Note-taking tools that jump on video calls perfectly illustrate this balance. Incredibly valuable for capturing transcripts and summaries. But where does that information go? If it's a paid license, you're likely protected. If it's free, you need to know exactly what's happening with your data.

Establish these guardrails early. They create confidence to experiment aggressively within safe boundaries.


2026: The Year AI Gets Tactical

Overnight transformations define this space. Gemini's Imagen release changed image generation completely. What every LLM struggled with suddenly became easy for one tool—and it wasn't close.

Agentic AI continues climbing. HubSpot, Salesforce, and other massive players are investing millions in R&D. They're finding the most use cases because they're pouring resources into solving specific problems at scale.

But here's what Eku sees coming: after AI proliferation everywhere, we're entering a tactical phase. The initial excitement is recalibrating. People are asking smarter questions. Putting up intelligent guardrails. Protecting themselves from themselves.

If your workflow is repetitive, manual, or causing pain—AI can probably help. The doors of efficiency it opens let teams focus on work they want to do instead of work they have to do.

Human intervention will always bookend these processes. Humans start. Humans review. But the middle? That's where AI eliminates friction and reclaims time.


What Our Chat with Eku Reveals

AI implementation isn't a tool problem. It's a people problem disguised as a technology question.

The companies that scale AI successfully in 2026 will do three things well. First, understand where teams actually stand with AI comfort and usage. Second, identify specific pain points that AI can address without creating new problems. Third, establish clear boundaries about what they won't use AI for.

The promotional products industry prides itself on relationships. AI doesn't replace those connections. It eliminates the friction that prevents you from deepening them.

commonsku is betting big on AI and automation in 2026. Both strategic initiatives aim to automate backend workflows so distributors can focus on progressive sales, creative problem-solving, and the client relationships that actually matter.

Ready or not, 2026 is when you stop tinkering and start scaling. The question isn't whether AI belongs in your business. It's whether you'll lead the transformation or watch from the sidelines.


Show Notes: Key Timestamps & Topics

[00:01:51] How to start without creating chaos
[00:07:50] Generative AI as this generation's calculator
[00:08:38] Building the custom GPT prompt architect
[00:11:09] Generative AI vs. agentic AI
[00:14:28] Starting with one platform
[00:16:58] Two pillars: operational rigor and enablement
[00:19:15] Balancing AI risk with guardrails
[00:22:45] From proliferation to tactical implementation
[00:25:38] Three tips for leaders heading into 2026


🎙️ Read Full Episode Transcript

[00:00:00] Music Intro

[00:00:06] Bobby: 2026 is the year you stop tinkering with AI and you start to scale it across your company. So what does that actually look like for your team? How do you scale it? Where do you even start? Well, on today's show, we're exploring what it takes to move your team from scattered AI experimentation to fully integrated workflows.

[00:00:24] Bobby: Welcome to the skucast, the podcast for innovators and maverick thinkers in the promotional product space. My name is Bobby Lehew. I'm glad you're here. Eku Malcolm is our new Director of AI Operations at commonsku. He spent the last four years at OfficeSpace Software, embedding AI as a structural operating principle, scaling the company from $18 million to $42 million ARR, while cutting manual work cycles by 30%.

[00:00:47] Bobby: More importantly, he's doing this work with our team right now in real time as we figure it out together. So today we talk with Eku about what leaders are actually asking for help with when it comes to AI, some concrete examples of what changes when workflows happen, what tools and resources Eku recommends, and a few predictions about where AI is heading in 2026.

[00:01:09] Bobby: Today's episode is brought to you, courtesy of us at commonsku. Over 900 distributors powering $1.8 billion in network volume rely on commonsku's connected workflow. Process more orders, connect your team and dramatically grow your sales. To learn how, visit commonsku.com.

[00:01:29] Bobby: Now here's my chat with Eku.

[00:01:30] Bobby: Eku, welcome to the skucast.

[00:01:32] Eku: Thank you for having me.

[00:01:33] Bobby: As I mentioned in the intro, we're gonna cover today what leaders need to know about AI moving forward, particularly as we talk about developing AI and making it a priority for our team in 2026. So let's just kick it off with first question. What's the one thing managers and leaders are asking your help with when it comes to AI right now?

[00:01:51] Eku: I think what's really interesting is that people are asking, not so much about the tools, but how to start. And I think we're all at this precipice of understanding AI. We all know that we're supposed to be using it, but we're not sure how to get started with it. And so I wouldn't say that leaders are necessarily confused about what AI is. I think they're overwhelmed about how it fits into their own ecosystem, which is really interesting. And the real ask is, how do I introduce AI without creating chaos or fear or shiny object distraction, that kind of thing? And so what's really important from my side is helping people understand the guardrails we need to put in place to run AI effectively. But then how to move from that kind of experimentation phase to how do we get to fluency? How do we get to enablement at scale? Now, at the end of the day, leaders don't want more tools. They want fewer decisions. So how do we do that in a way that can help them as opposed to something that they feel that there is additional work, which is the challenge.

[00:02:46] Bobby: That's a real challenge right now because in our personal lives we might be playing with all kinds of tools with AI. And then you're trying to figure out how to consolidate tools for the business use case, and then that way you can maximize everyone's education around them, ramp up people faster. It's a unique challenge that we've not had in some time in the market where you have this proliferation of tools and options.

[00:03:08] Eku: Yeah, and I mean, I think that's the thing, the AI ecosystem right now, there's so many options and so what we use personally may or may not be the most effective for a business lens, and that's something that we're also learning and evaluating. You know, ChatGPT was first to market and made it incredibly accessible, but that may not be the tool for your specific use case anymore, right? It may be. It's a good starting point, but I think that's something that we've almost gone from one extreme to no extreme, where we started with one and it was first to market and it was really prevalent. So everybody started there. So now there's AI tools around every block. And so how to pinpoint and to think about, and it's something that we're facing at commonsku is how do we make sure that we're using the right tools for the right tasks, right? And that is an evolving journey. Because honestly, that changes day to day when a new model comes out and you know, they can change the way that images or visuals are done. That's completely different than, you know, three weeks ago, which was completely different. And so my message to leaders is that you're not alone if you're feeling like you can't keep up. And I think that's normal. And so the biggest thing is, you know, starting to do your research and starting to ask questions and starting to think about what is the actual use case that you're looking for as opposed to, you know, do I need a specific tool? Those are kind of questions that I'm starting to ask myself, but I also think are important for our leaders to ask as well.

[00:04:36] Bobby: We're both passionate about this topic, so we're gonna balance all around. So one of the things I just realized too, as leaders, we bring our own bias about AI to the conversation and to our teams. So, for example, someone may be opposed to generative AI, but they love agent AI. Somebody may love generative AI for everything. And so what I am learning through this journey working with AI and talking with other leaders is that everyone brings a bias. The last time I saw this was during the pandemic when everyone went to virtual offices, and then you had this moment post pandemic where people were trying to decide whether to come back in the workplace, whether to have virtual, whether to have hybrid. And what was interesting that I learned talking with so many leaders then, was that whatever your bias was, you're gonna prove it. In other words, if work from home and virtual offices is the best option, you're gonna prove it by putting in the best practices for that. I'm feeling the same kind of energy around AI. Are you sensing that, that we are all sort of bringing our bias to this particular conversation?

[00:05:39] Eku: Yeah. And I think we're seeing the human dynamic at scale, which is really interesting. We're seeing people come out with caution, for lack of a better word, where they're, we're seeing that personality kind of come out and they're hesitant to try new things. We're also seeing the extreme innovators, right, who are at the forefront. And we're seeing it live. And the one thing that I keep coming back to with AI is that the only way that you're gonna get more comfortable with it is you have to use it. You have to try it. I think one of the things we're dealing with just at scale in general is that when people are reluctant to apply, there is a sense of a fear that comes with that. And so, that first barrier is how do we overcome that? How do we make AI normalized so that they're comfortable experimenting? Because then from that point, then they can decide their personal level of comfort, depending on the bias that you're talking about. But I think that first obstacle is, try it, see what is comfortable for you, see what is capable, what they're capable of, and what it's not. And I'm a huge believer in AI and I also think there are very specific tools that I will never use AI for because it's not designed to do that, and that conversation is really valuable to start to have. It's like, how do we start to identify, yes, it's a personal bias, but I'm gonna try it. And then that personal bias is now educated because we all know that when you're educated about a topic, you're more comfortable with it, and then you can make your own decisions as opposed to kind of like the fear of the unknown, which is a lot of people are working through as they're navigating AI right now.

[00:07:10] Bobby: Yeah. I've noticed on the extremes, you have folks that treat AI as binary. It's either all good or all bad. And so on the extremes, I have talked with folks who say, I'm not getting into that. I think it's a mess. I think it's bad. I think it bodes bad for the future. And then there are folks, I probably fall further on this spectrum where they're like, this is amazing. We've seen it do amazing work. And so I've noticed this even in myself that I have to check my bias and understand, you know, that we bring our bias to this conversation. But I agree with you if you are on that side of, I'm never gonna invest in that, I realize you're probably not even listening to this episode, but you know, it's really important to try these tools before you come up with, check your bias, try these tools.

[00:07:50] Eku: Yeah. The reality of the situation is, is if you don't try it, you will get left behind. And so what does that look like? At this point, what I've used this analogy with a few people, and I think it is very present for this conversation, is that using generative AI is this generation's calculator. What we never thought possible is now at our fingertips. You know, people from my generation and older. Oh, well you're not gonna have a calculator in your pocket. Well now we have a supercomputer in our pocket. And gen AI is that same element.

[00:08:25] Bobby: So Eku, tell me about one workflow that we've either built, for example, commonsku's prompt architecture and GPT. Whatever it is you wanted to share, some of the things you've tried to embed in our team or trying to embed right now.

[00:08:38] Eku: Yeah, for sure. So prompt architecture is a big need, especially for people who aren't comfortable in using AI. If they don't architect it correctly, it can lead them down a rabbit hole. It leads to trust issues. So one of the things that I did early on in my time at commonsku is I built a prompt architect using a custom GPT. And what that does is educate in the backend. And I created an element where I said, I need you to be the best practice prompt architect. I need you to self-educate on a monthly basis about what's new. I mean, that is one thing that is near and dear to my heart, I guess, and I'm very confident in saying is no one is staying on top of AI. Not even like, it's my job to stay on top of AI and I'm always feeling like I'm working from behind, right? So I needed to harness the power of GPT to be able to actually say, tell me what I don't know. And so I built the custom GPT with the idea of, I need you to help guide people to create prompts the most effectively that they possibly can. I built in some parameters around self-education, about quality control, about making sure there's no hallucinations, making sure that there is a clear path and a guide of actually saying, this is someone who doesn't know how to create a prompt. All they have to say is describe their problem or what they're looking to do, and then that prompt architect will actually generate a prompt that they can then feed in to GPT and it'll actually give them the specifics to actually go and execute that. And so that's leveraging infrastructure coming from the LLMs directly. Some of them are talking about, you know, the importance of having a role, the importance of having a specific use case, of giving context, right? When you just give a general, hey, please design this, that is challenging when you can say, hey, I want you to think of yourself as this role with this use case, and this is the parameters and the constraints that I want you to do it in. The output is a much higher quality. So I would say that has been effective so far. That was a very early example, and it was, honestly, it took me probably about 10 to 15 minutes to do. So it was easy to create, and now the output of that snowballs because people can use it effectively to create their own prompts in whatever they need to do throughout the company. So it's something that is open to our company and really didn't take a lot of time upfront, but hopefully will have a positive implication in terms of people's output and what they're able to do going forward.

[00:10:58] Bobby: So in a sense, you've calibrated AI to automatically create better output for our team no matter where they are, no matter what their skill level is.

[00:11:09] Eku: The only word I would challenge there is automatically. There is still a point of reference that needs human intervention at the start. They need to actually create, to use the prompt architect to say, hey, I would like this prompt architect to help me design this prompt for so-and-so solution. And the reason why I call you out on that is very important because this is the difference between agentic AI, autonomous AI, and generative AI. Generative AI is all about creating content. It's creating something that didn't exist before, something that's gonna save you time efficiency. The reason why I'm being specific about the word automatic is because that is what agent AI is working towards. The ability to create an agent that will automatically do a task for you. And so there is a difference there that I think is important to call out. Are you looking to create, which is creating a trigger from yourself, or are you using a systematic trigger that's gonna say, hey, I want this agent to run in the background and generate that email and then send that email autonomously without my intervention.

[00:12:11] Bobby: One quick jump back into platforms because we're talking about GPT and custom GPTs. Right now, there are organizations that are kind of split. I would say our audience is kind of split down the middle. Some of them are what I call Google houses, and some of them are Microsoft houses. So some of them are gonna have Gemini tools baked into their normal workflow because they work in Google, they work in Gmail. They work in Google Docs. And then there are other companies who are hardcore, hardwired into Microsoft. And so they have a lot of Copilot and other tools like that. What we are seeing emerging, to keep up with this sort of horse race about who's got the better tool right now is so confusing 'cause one day it's, you know, one day it's this and then it's that and it's just so massive between Claude, GPT, Microsoft, Grok, you name 'em, you just go through the list of DeepMind. You tend to have this horse race, but would it be best for companies to just embrace the tools that are embedded in their normal workflow first? I mean, how do you recommend leaders sort of go, okay, there's 20, 30 tools out there, how do we coalesce this down into something we can use for now?

[00:13:21] Eku: Yeah, it's a really important question, so a quick clarification. Gemini is baked in from a surface level, but most people don't have the Pro edition, so they're not choosing to pay for the enterprise package of Gemini. And so that is a really important differentiation because Gemini at scale, I would just argue, and especially with what it's done recently, if you have that Pro package embedded in your Google workspace is incredibly powerful and maybe enough, but if you're not paying for that additional level of service and that's not including your Google workspace, its benefit is limited a little bit. And so that's something that as, there's a business decision that people would want to have in terms of do they want to pay for the Pro edition of Gemini to be embedded in their workspace? That's one. Okay.

[00:14:06] Bobby: Okay.

[00:14:06] Eku: The second thing that I would say on that is, honestly, I would start with one, and there's so much proliferation out there, but all of them have pros and cons and we can get into that if it makes sense. But what I would recommend is that being conscious of the fact that something like ChatGPT, most of your team is already using it. We just did a survey and we found that 94% of our team is using GPT.

[00:14:27] Bobby: Yeah.

[00:14:28] Eku: And so that's something that they're already comfortable with and it's probably gonna get you 80% of the way there for most of the tasks that you want to create. The question becomes, is it the right tool for those specific use cases? And that's something that's probably worth doing some research on. You can use ChatGPT to do that research, but starting to ask those questions, say, oh, like I want to be a financial analyst, well, maybe GPT is not as good at that. And so that's something that asking GPT directly is like, what are the best tools for this? What I would really recommend is start with one, like start with one model and let it build for a while. Get comfortable with that a little bit and see how that's progressing. And then of course, just like any software, there's gonna be outliers that you need something else for, but start with one and really see what you can get out of it. Do the education element, use the enablement part of it, really try to get your users comfortable with it, and then kind of adapt from there.

[00:15:24] Bobby: Okay.

[00:15:25] Eku: GPT is easy because everybody's already comfortable with it, and we're all using it kind of in our personal lives.

[00:15:30] Bobby: Yeah.

[00:15:30] Eku: But it may not be the best tool, and so that's something that as you get more specific, maybe Claude becomes more valid for coding, for example, or for deeper analysis. But what I will say is that for my journey with AI, I used GPT almost exclusively for probably about a year and a half or two years. And it did most of what I needed it to very well, with a few kind of key elements that I was like, for a long time it couldn't do design and visuals very well. So I knew that and I said, okay, so I'm not gonna use GPT for that, but for anything content that I was creating, it was very helpful. So that's kind of what my advice would be. Start with one and then pivot from there.

[00:16:12] Bobby: Okay. Let's speak a little bit to your role, because this is something that's gonna interest a lot of folks that are listening that haven't done this yet. I've noticed some businesses in the promo space start to do this, where they've brought in a specialist or someone who is going to share a role, but you are commonsku's Director of AI Operations. As I mentioned in the intro, it's a role most companies don't have yet, but what should, you mentioned a survey and that was one of the first things you did. You also instituted an education day for our team every Friday. Speak to this a little bit. What should a distributor look for in someone like you to help bring on an AI expert and kind of share a little bit of your role in what you're doing right now so they can sort of learn from that.

[00:16:58] Eku: Yeah, for sure. So my role has two distinct components to it, and both are very important and neither of them are tool-based to be upfront. So one of them is operational excellence and operational rigor. You need people, as you look at bringing someone like myself into a role, you need someone who understands what an operational workflow looks like and that ability to kind of, this plus this will equal that. There's a tactical element of my role that we're still building up where I am going to be the person who's working directly on building agentic workflows within our company. And so doing that kind of tactical work is gonna be something that's important. So that operational excellence and operational rigor is one key element. The second key element is enablement. And so what you're describing is I created an AI survey, kind of a comprehensive list of these are the questions that I wanted to know about how people are using AI, where they're comfortable, where they're not. We made that anonymous so that we could just get real feedback from our team to say like, this is how you feel about AI right now. And so we came out with some really interesting findings about how people are feeling about AI at commonsku. And so that's been something that what was very important is we needed a data set. We needed to understand what we were working with. The second thing that you mentioned is I created an AI office hours, and so the way that I built this is the first 15 minutes or so is about really sharing wins. What are people experimenting with? What are they having success with? What are the challenges that they're facing? One of the biggest elements about creating an AI culture is creating group enablement and creating that kind of culture of like, I did this with AI. This is exciting. Group think, and these are the challenges I faced. So that was one element. The second element of every one of those AI office hours is a thematic curriculum, essentially, for lack of a better word, where I'm working on a 15 to 20 minute presentation on a topic regarding AI. I've built out kind of 12 topics to start that are early stages, but with the full understanding that this is gonna evolve. And one of the things that, you know, I was talking about even yesterday is like, we need more specifics on how we can use Gemini. It is embedded in our workspace. How do we use it effectively? And so that was an example. There's a tactical element, that operational rigor, and then there's the enablement piece, which is important as well.

[00:19:15] Bobby: What I'm seeing you address, even with our own team is there's this interesting balance right now between risk and also caution. So, you mentioned guardrails a few times. I've noticed there's a tendency for us to want to be very cautious about tools and naturally so, and especially for commonsku, we're a data-driven company, so you know, there's data that we have to keep locked down. But speak to that a little bit. You feel like in this moment in the market, there are folks that probably gonna just 'cause they're biased, they're gonna be more cautious. There are people gonna be more aggressive. How do you feel like right now, even as you watch our own team struggle with it, it's important to have sort of an 80% risk right now because you're trying things and 20% guardrails, or is that too simplistic?

[00:20:03] Eku: I don't think it's too simplistic. I think that it is: be mindful of what those guardrails are. So the most obvious guardrail is customer data. Like you have to make a decision of, if you were just using a random AI tool, pick your poison. How is that data educating its backend? LLM models are consistently educating themselves and so where's that coming from? So one of the key elements that I would think about is, you know, getting into that paid license can be very important because the thing that I've learned about freemium is that you're sacrificing something, so it's free for a reason. So getting into a paid version of Gemini, getting into a paid version of GPT, Claude, that at least is allowing what you create in your workspace not to be educating the masses. So that is a key element. The second thing I would use is just having that kind of common sense approach is that making sure that when you say yes to connect your Gmail or connect your email address or whatever, is that be aware of what those consequences are. And the best example to me of this is note takers. We've all seen now note takers jump onto calls and incredibly valuable, but it is also something that you have to think about. Where is that going? Like where is the contents going, right? If it's a paid license, you're probably protected. If it's not, where is that information being used? And I guess I always go back to, you know, the analogy is that if you wanted, if you're gonna put something on the website or onto the internet, are you comfortable with it being publicized on the front page of a newspaper? That's always the guardrail that I start with and there's variations of that, but essentially I come back to is like, okay, so from a customer data perspective, I'm only gonna use that in a lockdown environment where I know that it's not going anywhere. If it's something that I'm just asking about normal business practice or you know, something like a standard operating procedure, that might be something I don't care if it's like, it's not unique to how we operate. I'm more comfortable with that and so I might go to a more open source option in that perspective.

[00:22:08] Bobby: Then let's jump to where things are going or where we think things are going. You're a student of AI. I'm a student of AI and for those that are listening, I mentioned this in intro, but don't forget, we have the AI Promo Brief, which is our newsletter that we sort of look at the emerging tools and things that are happening in AI through the lens of the promo world. But since we're both students of AI, where do you see the big changes going in the upcoming year? Do you see agent, do you see design, which in your mind as we move forward.

[00:22:45] Eku: It's a really good question, and honestly, I don't know if I have a good answer for you, so I'll give you kind of my thoughts, my expectations. When Gemini came out with Imagen, it changed the game from a visual standpoint. This was something that every LLM had not been doing well, and all of a sudden, literally overnight, it went from no one was doing this well to one tool was doing this better than everyone else, and it wasn't close. I think agentic is climbing. People are looking for ways to make their lives easier, to make their business life easier, to make things flow more effectively. And there is value and it's able to just consolidate so much more information than you have at your fingertips. You are seeing big companies invest in AI. So whether it's a HubSpot, whether it's Salesforce, you're seeing these kind of big, massive players. They are spending millions of dollars investing in R&D on AI, and so they are getting the most use case out of them. To answer your question most directly, AI is embedded in everything that we do now. Phones, now AI functionality is super important. So I think what you're gonna start to see is as we've gone so far in one direction of like AI proliferation everywhere, you're gonna start to see more tactical approaches to AI going forward. So you're starting to see how people are using it more specifically, they're putting guardrails up because now there's a caution element to it. We are open, but we are now starting to question, and I think this is the human condition, which is a good thing, is that we are gonna protect ourselves from ourselves a little bit. And so now there is, we were so excited by the prospect of AI to start. Now it's kind of going the other way. And so now we have to kind of recenter. What I will say though is the models are changing so fast and in such a positive way. I saw it this morning. Gemini is passing math tests, which it wasn't able to do before, and GPT is connecting to tools that didn't have that access. So it's not really a fair answer, but I think it is the true answer is that it's evolving constantly and so we need to understand if you are in something that is repetitive, if you are in a workflow that is manual, if you are in a workflow that is challenging, ask the question if it can be done with AI, and the answer is probably in some version of yes. Now, human intervention will always be at the start. It'll always be at the end, but a lot of those middle points can be helped or addressed in some way, shape, or form with AI.

[00:25:02] Bobby: I must mention that we have a couple of big strategic initiatives for next year. One is AI and one is automation, both of which are meant to help automate and create intelligence around the backend of your work so that you can focus on better sales and more progressive sales and working with your clients and more creative and things like that. Eku, let's wrap up with this. As folks that are listening are typically leaders in suppliers and distributors, they're leading teams, whether it's large teams or small teams is immaterial, they're leaders and they want, they're gonna head into 2026 and they want to be more aggressive with AI. What would be your final tips to them as they head into 2026?

[00:25:38] Eku: Doing whatever version of that survey for your team, however big your team is, right? Small teams, big teams. But getting an idea of where the landscape is for people in their comfort with AI is a great starting point. Even came to us from our survey recently, there were things that came out of it that people were surprised by with their comfort or their experience with AI. So that's the first step, is definitely understand your perspective. The second thing that I would say is really identify those pain points. What are those pain points in workflows that are causing grief right now?

[00:26:06] Bobby: Yeah,

[00:26:06] Eku: How are people spending their time in terms of ways that they don't enjoy? That's a great place to start. The third element, kind of going back to the guardrails and the caution conversation, but understand what you're comfortable with from an AI perspective. You know, and a great example is sales, right? People look at sales and they say, oh, I need to be in person and I need to actually have a human interaction with my sales team. And especially with distributors, and my understanding of this industry so far is that people really pride themselves on that, and that's amazing, right? So then we have to figure out where does AI contribute to that? But then you're putting a clear guardrail up, which is really important for you and your company to say, okay, we're not gonna have AI do SDR calls, for example. We're not gonna have them do business development. Another one is interviews. So now you could absolutely have a full interview process that's done by AI. It's not as valuable potentially. And so now you're putting up clear guardrails as a company to say, okay, I'm still gonna have a human interaction with a Zoom call or in person interview, whatever that looks like. So starting to think a little bit about what don't you want AI to do and then work backwards is also really helpful. I'll kind of finish the answer with just saying this is that again, if it's something that is causing pain, if it's something that is repetitive, if it is something that is manual, I would be very interested about using AI for it, because the doors of efficiency that it will open, it will allow you to focus your team on the things that they want to focus on as opposed to the things that they have to focus on.

[00:27:39] Bobby: That's a great way to end this. Eku, thank you my friend. You're gonna be back with us again because I think you're gonna be a recurring guest this upcoming year as we go through different models and evolutions as things happen. So stay tuned everyone. We'll have Eku back on. But thanks my friend. Thanks for joining.

[00:27:52] Eku: Awesome. My pleasure. Thanks so much.

 
