RTO Superhero 🎙️ Empowering RTOs to Thrive!

Beyond the Bot: Navigating AI Ethics in Vocational Education

• Angela Connell-Richards • Season 5 • Episode 38

The vocational education landscape is experiencing a seismic shift as artificial intelligence transforms everything from assessment design to compliance management. This thought-provoking conversation with AI expert Lauren explores the practical implications of this revolution for RTOs.

We dive deep into how AI is already reshaping assessment validation processes, with Lauren sharing how her organisation has redesigned hundreds of assessments to incorporate live video components. This critical adaptation allows trainers to validate a student's written submissions against their verbal capabilities—helping to identify potential AI-generated work when written language significantly outpaces verbal expression.

The discussion takes a cautionary turn as we explore the vital questions RTOs should ask before adopting any AI solution. Where is your data being stored? What security protocols protect it? Most importantly, do the people behind AI compliance tools actually understand RTO operations and regulatory requirements? Lauren emphasises that no matter how impressive the technology, the human expertise behind it remains essential.

For RTOs yet to begin their AI journey, we offer practical starting points based on your scope of registration. Those in business, finance or IT sectors face the stark reality that students are already using AI extensively. Rather than futile resistance, we suggest strategies for incorporating AI authentically into assessments while maintaining academic integrity.

Perhaps most concerning is Lauren's prediction about the future of compliance challenges. While ASQA has recently cancelled thousands of qualifications due to insufficient evidence, tomorrow's challenge won't be lack of evidence—it will be determining authenticity as AI tools become sophisticated enough to generate convincing evidence of competency that was never genuinely achieved.


 Join host Angela Connell-Richards as she opens each episode with a burst of insight and inspiration. Discover why compliance is your launchpad to success, not a limitation. 

Connect with fellow RTO professionals in our free Facebook groups: the RTO Community and RTO Job Board. Visit rtosuperhero.au/groups to join today. 

Wrap up with gratitude and guidance. Subscribe, leave a review, and join our community as we continue supporting your compliance journey in vocational education. 


Thank you for tuning in to the RTO Superhero Podcast!

We’re excited to have you join us as we focus on the Revised Standards for RTOs in 2025. Together, we’ll explore key changes, compliance strategies, and actionable insights to help your RTO thrive under the new standards.

Stay connected with the RTO Community:

📌 Don’t forget to:
✔ Subscribe to the RTO Superhero Podcast so you never miss an episode!
✔ Share this episode with your RTO network—compliance is a team effort!

🎙 Listen now and get ahead of the compliance changes before it’s too late!

📢 Want even more compliance insights? Subscribe to our EduStream YouTube Channel for our FAQ series on the New Standards for RTOs 2025! 🎥

🔗 Subscribe now: EduStream by Vivacity Coaching

✉️ Email us at hello@vivacity.com.au
📞 Call us on 1300 729 455
🖥️ Visit us at vivacity.com.au

Speaker 1:

Welcome back to the RTO Superhero Podcast with me, Angela Connell-Richards, and once again with Lauren, who's joining me for a great episode on one of our favourite topics, which is AI. Welcome, Lauren, how are you?

Speaker 2:

Good, good. Thank you so much for having me again.

Speaker 1:

Yes, it's awesome. So we were chatting offline about this, about AI and where the focus is going in the future, in particular when it comes to how ASQA are going to audit us and the different direction that businesses are going in due to AI, ChatGPT and that type of technology, in particular with resources and things like that. So tell me, what drew you to AI from the beginning and, in particular, how does it apply in the RTO sector?

Speaker 2:

Yeah, so I did my first talk on AI in, I think it was 2023; VELG was the first public speech that I did about that. Obviously we've been using AI like GPT, Claude, Bard, Grok, things like that for that same period of time, watching its evolution. And then I was at a conference a couple of weeks ago, the VETQI conference, and there were a lot of questions and topics about it, and I think when we have the VELG conference this year we're going to see a lot of speakers talking about it. I think there's going to be a lot of questions, and we've got these new standards in which have a really strong focus on governance. When I start to look at how RTOs are engaging with AI, how trainers and assessors are engaging with AI, how students are engaging with AI, I just think it's one of those spaces that is substantially going to change how we work.

Speaker 2:

You know, I homeschool my child, and I had Zach read a book last week and I said to him, okay, I want you to sit down with ChatGPT, I want you to create a structure for a book report, I want you to cover X, Y and Z, and then you'll need to work with it to refine the report. This is a kid that's got dyslexia and hates writing, but that's a really easy way for him to interact so that he can create a proper report, and effectively that's how he would create any sort of report or document in the workplace, right?

Speaker 2:

So we're now starting to look at how we adapt our assessment tools. In Annawire we build a lot of our assessments in Accelerate for our clients, and over the last two months we've had to rebuild something like 700 or 800 assessments in our client systems, because we're now starting to incorporate a live video component. So when we do our validations, we can validate what the student is typing, their written language, against what they can produce verbally and how they can talk to a topic. You can see a student who is writing very academically, but when they speak, their oral language demonstrates an AQF level two to three levels below what they're writing: same questions, but the written version and the oral version just aren't lining up. That's something we're having to start to do, because when it comes down to validation, it boils down to: if you're going to walk the walk, you've got to talk the talk as well, and I think this is something we're going to start to see. Now, that then brings in things like adapting our assessment instructions to say a live camera is going to come on and record you, please make sure there's nobody in the background, and please make sure you're dressed professionally. So we then start to think about, from a privacy perspective, what we can and can't do.

Speaker 2:

I know when I was at the conference I was talking to Accelerate, and they'd had a client come to them and say, look, can we create software where we can record in the childcare centres and it automatically blurs out all of the children's faces and identities? So AI is impacting all of this, and there are so many different ways in which it affects people. As you know, my little AI, Fireflies, follows me around to all of my meetings, and in most of them she's allowed to participate. But certainly in any of my government meetings I just turn around to them and go, look, I've got no choice, she follows me everywhere, just kick her out. But it's a fantastic tool, because I used to struggle to demonstrate all the quality activities I was doing with clients. Now every meeting is recorded, meeting minutes are produced, and then my amazing operations manager, Laura, takes every single one of those meeting notes and copies it into the client's quality register. So whenever we're asked what quality activities we're doing, we've got notes upon notes upon notes showing that every couple of weeks we're talking about an improvement we're making or a plan we're implementing, and it's all there and it's so easy. So I think what AI can do is absolutely brilliant in our sector. It certainly makes it easier for me to make far more innovative, enjoyable and engaging assessments and content.

Speaker 2:

But there's a whole other side of it as well, and we were talking about this before. Like, you've got an AI product; there are three or four of these AI compliance hubs out there from, you know, little teeny boppers, no offence to the teeny boppers out there, but the photos look like photos of my 18-year-old. It's clearly this little group of boys that have got this great AI knowledge, and now they're saying that they can help you run your whole business, but they've never actually worked in VET before. So there's a lot of stuff out there that RTOs are now going to get pitched: oh well, you could do this and you can do this.

Speaker 2:

And it's that filtering process. For our kids growing up nowadays, it's not about whether or not they've got access to the information, it's filtering the information: what's the quality of it and where does it come from? I think RTOs have to be just as savvy. What is the tool that I'm looking at? What is it built on? Who is it coming from? And is this actually something that's going to save my business money and give me a return on investment? Or am I just buying something that I think is going to tick a box when really it's not going to make any sort of measurable impact on my business?

Speaker 1:

Yes, so what do you think are the biggest risks for RTOs when it comes to adopting AI?

Speaker 2:

So I was having a really interesting conversation with Vanessa from Prickly to Sweet, who's been in AI for what, eight or nine years now, and I think Kerry Buttery as well is another one that I've had a couple of chats with, and she's very much operating in that AI space too. Where your data goes is something that I've taken from both of them. That's definitely something: what data you're being asked to put into the AI, what data you're getting out of it, and how it's then secured. Is it being held on an offshore server or an Australian server? That's the biggest question.

Speaker 1:

I get asked.

Speaker 2:

Yeah.

Speaker 1:

When I refer people to Accelerate.

Speaker 2:

I know they've got, like, 27 different security controls in their system. And I mean, the government's using Microsoft, so Microsoft and Microsoft Copilot and things like that, their security is obviously approved, but there are a lot of others where trying to get that information from them is problematic. So I think that's one thing RTOs need to learn to navigate and negotiate. And then there's the GIGO principle: people who have built AI systems that don't have RTO knowledge, that can't demonstrate they've got RTO knowledge. And not just necessarily RTO knowledge. If you're looking at something to run your whole business and your business runs six funding contracts, then okay, I can write a policy set for your RTO standards, but there is not a single funding contract that doesn't require you to adapt your policies for that funding contract: in terms of finance, in terms of how you capture your data, in terms of your enrolment process, in terms of additional things you need to capture from the students or information you need to provide to the students. So I just think people need to learn how to have really good conversations with any provider they're going to use any sort of resource from, and I would be very reluctant.

Speaker 2:

And I've seen this advertised on a few of these RTO assessment development things, where they go, use this tool and it'll build your resources in five minutes, it'll build a compliant, contextualised assessment in five minutes. And I'm just going to turn around to people and say it's bullshit. I have not seen a system that can write an assessment tool in five minutes that's compliant. Most of them are built off GPT, which is built off the internet, and the internet is full of non-compliant assessment tools. So if it sounds too good to be true, it probably is too good to be true.

Speaker 1:

So with ComplyHub, the system that we've built, you've touched on that: we've actually integrated it with TGA, so it will bring in the information for the training products that are on your scope of registration, and that does make a big difference. But even just downloading the unit of competency from TGA and putting that into ChatGPT, you're not going to get a good assessment tool, because it's just going to follow the legislation or the requirements of the training product. It's not going to go into real-world scenarios; it's going to create an assessment against PC 1.1 and one against PC 1.2.

Speaker 2:

And you know what, within the next year you might actually get maybe a compliant-ish tool, but it's not going to be usable. The assessors are going to hate it. It's probably going to be far more academic than it should be for the AQF level. There are so many nuances; we still need humans.

Speaker 2:

Yes, thank you. It's a great tool, an amazing tool, and I use it every day, and my business would be a lot more cumbersome and more expensive without it. I love it.

Speaker 2:

I think it's amazing what it's going to do, but there has to be that human component behind it, and anyone purchasing a service that uses AI needs to talk to the people that sit behind it. So if you're going to use Vanessa's AI validator, go talk to Vanessa and see the knowledge that she has. If they're going to use your AI compliance hub, go and talk to you and your team and see the compliance that sits behind it. I know the Learning Resources Group has got a couple of different AI tools that they're bringing out. They've been in this industry and they've actually built good assessment tools, and this is coming from somebody who builds assessment tools.

Speaker 2:

But we've always had a lot of respect for the Learning Resources Group, because they don't sell these very cheap assessments where the contract requires you to take seven steps before you get any sort of compliance guarantee out of them. They genuinely mean well, they genuinely want to provide a really good quality product. You need to talk to the people that sit behind it and ask those questions, and the governance standards are very clear: do your due diligence. Know the people that you're working with. Know the companies, and know the people that sit behind the companies that you want to associate yourself with.

Speaker 1:

Yeah, definitely, and I think one of the big things around using AI, like what I see, is I need it as part of what we do now as a service. However, you still need to have a good understanding of the requirements of the legislation, the standards, and the other requirements you need to have in place across a range of different legislation. I think someone trying to develop a compliance system who doesn't actually have any idea about compliance is not going to be able to build a good system. I'm now able to build a system without a software developer, and it's fantastic, because I have built stuff with software developers before, and it was always trying to teach them what you do and how to do it. Now I don't have to do that. I'm working one-on-one with the AI and I'm teaching it, and it remembers, which is great, and we're building on that over and over again. So it's so much better than trying to work with a software developer who's really tech savvy but just doesn't understand the topic that you're talking about. And I totally agree with you that you really need to have a good understanding of who you're working with and whoever's building these systems.

Speaker 1:

Do they understand the requirements when it comes to compliance? Because the last thing you want is to turn up at an audit and have your auditor pick up some mistake that the AI has made, putting you at risk, particularly if you're getting assessment tools built, and putting your RTO at risk of certificates being cancelled due to not collecting sufficient evidence. So I'm going to move on to another question. For an RTO that hasn't started yet with any AI, what would you recommend as the first place they should start?

Speaker 2:

I would say it would depend on the scope of the RTO. For those that are operating in more of the soft services, like business, finance or IT, anything in that sort of space, the first thing that you need to know is that your students are 100% using AI now, and if you're not across it, I can almost guarantee that your assessments are being fraudulently submitted and you're completely unaware of it. So my first step in an RTO like that would be to get the assessors working with AI very regularly, even if it's just helping them to create their session plans.

Speaker 1:

Contextualising.

Speaker 2:

Yep, contextualising certain things, maybe getting three or four assessments together and working with AI to do the validation, or something like that, just so that they can start to understand the way that AI writes, the formatting that it prefers, the spelling that it prefers, because that's going to make it much easier for them to start to pick up on that with their students, and also get them to start to think about how they can update their assessments to incorporate more AI. Like, when we're writing assessments now, we'll do something like: okay, you need to develop and implement a policy on X. What you're going to do is work with AI to develop a template and a draft, and then you're going to go back and forth until you refine it and get it to exactly what you want. You're then going to submit a copy of the chat with your AI and the finalised document. That is exactly aligned with what we would expect in the workplace. We're not putting limits on students about working with AI, but we are getting the AI transcript from the students so that we can see what the student's actual knowledge is. And teaching trainers how to tweak and adapt assessments in that particular format, I think, is incredibly helpful, because we're moving things into how we're doing them in the workplace, which is really what we should be doing anyway.

Speaker 2:

So I think, in that context, that's probably the first thing that I would be doing. If we're working more in a trades sort of college, where we're not as big on computers and we've done a lot of stuff in hard copy in the past, I would have to tread a lot more slowly with my trainers. I would start by giving them a couple of brief overview sessions on what AI is and what it's capable of. I would then probably go into a little bit of how AI is impacting their industry. So when we think about brickies and chippies and trades like that: how much easier it is to create plans nowadays, how easy it is to start to use AI to support them with calculations, adapting plans, ready access to, hey, what is this building code again?

Speaker 2:

Maybe just get them to download the GPT app on their phone and start to think about things like that. I would maybe step in a lot more and focus on how AI is being used in their industry, and then from there, once that initial fear is overcome, I would start to step them into how they can use it as a trainer and take them through that process. So definitely there would be a different process depending on the scope of the RTO, and if it was more like community services, health services and things like that, I would take a similar approach, but I would also really strongly layer that with AI privacy and needing to understand how privacy and protection are different now that we have AI happening in that particular industry.

Speaker 1:

Yeah, yeah. What do you think the challenges could be with what you've just described there, getting the trainers on board? What do you think the challenges will be when it comes to resistance?

Speaker 2:

Yeah, look, we operate in an industry that is just so constantly changing. We've had new standards this year. We've got the new Training Package Organising Framework, with a completely new structure for our units, coming out later this year or early next year. We have a new AVETMISS system being launched and trialled from next year. Our industry is nothing if not consistently full of change, and I think so many staff get change fatigue, so for them it's having to look at this and go, so I'm out of touch again.

Speaker 2:

I've got another thing I've got to learn. I think that's probably going to be the biggest hurdle that RTOs have to contend with, and it's the same as everything: you've got to take the approach of, look, yes, there are more things we need to consider, but how can we start to use AI to make your jobs easier? So, as a trainer, Accelerate's got this tool where it recommends feedback. It's a fantastic tool for trainers, rather than having to type out the same thing they've only ever typed out.

Speaker 2:

Good job, well done, you know.

Speaker 2:

It'll give them a recommended response that correlates to the question. So let's talk about the ways that AI can actually make it easier for you to develop a session plan, or some of these little tools where you can click a button and go, give me a summary of this website, or automatically give me six questions that I can ask my students about this website, just using a little app and a button at the top of the screen. I think we kind of have to hook them using those sorts of things, because the compliance requirements continue to change and update and, as I said, at the moment we've had 25,000 qualifications cancelled by ASQA in the last couple of months. They've been cancelled because there was a lack of assessment evidence; there was no evidence of competency. Over the next five years, that is not going to be the compliance problem. The compliance problem is not going to be that the assessment evidence isn't there. The assessment evidence is going to be there.

Speaker 2:

The question is how fraudulent it's going to be, and you're going to see qualifications being cancelled because they were fraudulently produced out of a company in Manila, or fraudulently produced by a very smart AI tool that was literally designed to pump out 32 versions of the same assessment, all with minor modifications in them, and then automatically upload them into a learner management system. This is 100% where we're headed. At the moment we can cross-check some of those things, as I said, with video uploads of the students and cross-comparing. Five years from now, the student will come in, they will record a five-minute video of themselves, they will upload it into an AI tool, the AI tool will produce a video of them, and you will have video, audio and all the assessment documentation to prove that a student knows how to achieve their Cert 3 in bricklaying.

Speaker 2:

So the bad actors out there are going to have tools that make it very, very difficult to detect that they're doing the wrong thing, and part of the challenge for the regulator, and for those of us in industry that want to see our industry do well, is going to be figuring out how we catch them. That is going to be the big, big challenge. It's not going to be a matter of going into an RTO and saying you don't have the assessment evidence. For the next three years they're going to have all of the written evidence to make it look like they've done the right thing, and five years from now they're going to have the video and audio evidence to support it as well. It's going to be quick. It's going to be commercially viable.

Speaker 1:

Yeah, I don't even think it's that far away.

Speaker 2:

We really need to think about how we protect our industry and how we can use AI in the same way. So things like automated tools: a video that turns on and then takes live screenshots as we're regularly going through something, or validation software whereby, when the video is taken, it automatically runs a program behind it to make sure that no AI software has been overlaid. But it's going to totally be a cat-and-mouse game, as it always has been, and that game is going to get faster and faster. So we have to embrace AI so that we can stay ahead of the few bad actors, the 3% of bad actors in our industry that drag the 97% of good guys down and make their lives so much harder, and so that we can continue to educate students and say:

Speaker 2:

If you're not getting the skills, don't go and do it, don't pay for that qualification; it'll get taken away from you. And at least now we can turn around and say: 25,000 quals, 25,000 people have paid three to four thousand dollars for a qualification, only to have it taken away from them. Don't be a sap like that. Go and do the qualification, get the actual skills, because that's what's going to lead you into the industry and actually allow you to earn the money to do the thing.

Speaker 1:

And the thing is, you don't want to be turning up to work where you're expected to meet those skill levels that were apparently taught to you and not be able to do that, because you're creating liabilities for yourself, but also for the RTO. And I think one of the really important parts, and something you've already touched on: if you haven't been into AI yet, you need to be. It's a must. As you said, the students are already using it. The trainers need to be using it so that they understand it, and it doesn't matter what industry you're in. Once again, I do agree with you: look at how your industry is using AI, if they are, and if they're not, how could they use it? And then integrate that into how you're working with the students.

Speaker 1:

And I love that idea of getting the chat back from the students: what prompts are they putting into ChatGPT, what are they getting out, and how much do they actually understand? And the big thing is the workforce. This is the workforce now. I'm on ChatGPT all day, every day; it's like my personal assistant.

Speaker 2:

Well, I mean, you would never ask a staff member, hey, I need you to write this document, this email, whatever, and not 100% expect them to use AI. You actually don't want them sitting there and doing it all manually. You're like, no, I would rather pay you for half an hour to do the job using AI than pay you for three hours to do the job manually. You and I are businesswomen.

Speaker 1:

We have a ChatGPT Team account, and all of our team have access to ChatGPT, and I keenly encourage it. We had a new team member join us recently who had never really used ChatGPT at all, and I actually spent some time training her on how to use it, and she was like, wow, this is just going to be amazing, it's going to make my work so much easier. And I say to her, go use ChatGPT, don't spend hours doing it the old way.

Speaker 2:

But use it as a tool. It's not a be-all and end-all solution, it's a tool. And so I continually go back to my staff and go, hey, I've noticed with 5 it is far more prone to telling you what you want to hear as opposed to what you've asked it for. So we go back to the bot and go, okay, but where did you get that information? Okay, now give me the link. Okay, now let's actually go and do this. And sometimes, three or four times in a row, it'll be like, yes, it goes against the new RTO standard 5.3. I'm like, there is no new standard 5.3, dude, where are you coming up with this shit? And it would be like, oh sorry, yes, you are correct. I'm like, stop lying to me, stop it. I get pretty upset at it at times.

Speaker 1:

I've had the same thing. One of my team members uses ChatGPT for our social media, and he's not checking what it's putting out, and it's referencing the wrong standards. I had to go back to him this week and say, hey, it's happened again. Reminder: human interaction.

Speaker 2:

Human interaction required.

Speaker 1:

Yeah, you can't just have it spit things out. I'd rather he just take the standard out and talk about what compliance is, not put the standard in there, or go back to the bot: are you sure? Is this the 2025 standards?

Speaker 2:

Yeah, and, as we said, like at the end of the day, it's about the person that's sitting behind it using the tool, knowing what they need to know, right.

Speaker 2:

That's right. Like you and me, when the bot turns around and says to me, oh yes, this is against standard 5.3, I'm like, no, no, it is not, because I know there is no standard 5.3. You have to have that person behind it that knows what they're doing. So if you're an RTO and you want to use an AI compliance tool, that's fine, that's your call. But please go and talk to the person that sits behind it. Talk to them for five or ten minutes and see if they actually have any knowledge of the standards or any RTO experience whatsoever. That would be my only suggestion.

Speaker 1:

Yes, yeah. And maybe the way you could test that is to go in with some compliance questions about how you're doing your practices now, so you can identify: is it actually going to help you, is it going to streamline things, or are you just purchasing software that's really not going to do anything?

Speaker 2:

Well, you know, test it: we don't currently send our assessors out to do direct observations, we base everything on third-party reports. Is that fully compliant? And if the RTO bot goes, yes, I'm sure there are absolutely no problems with that, then maybe you need to think twice about whether or not that's the correct tool for you.

Speaker 1:

Well, this has been fantastic. So, just to finish off, if you could give RTO leaders one piece of advice when it comes to adopting AI, what would that be?

Speaker 2:

Use it as a tool, but only with a human operating it and making the final call at the end of the day, because we definitely don't want Skynet taking over like in The Terminator. And this is why I live in Far North Queensland, because we'll be the last ones to be taken over, a bit like COVID. I'd like to jump on a boat before that happens.

Speaker 1:

Yeah. Well, thank you very much, Lauren, for once again joining me on the RTO Superhero Podcast. It's always wonderful to have a chat with you and talk about our favourite subject, AI. Thank you very much, and I look forward to catching up with you again soon.

Speaker 2:

Thanks, Ange, and thank you very much for putting out this podcast.

Speaker 1:

Thank you. Awesome, great work.
