PODCAST

Explaining how Kafka works with Robin Moffatt

Since Kafka was originally developed at LinkedIn in 2011, it has gone on to become the de facto standard for large-scale messaging and event processing. While Kafka is open source and managed by the Apache Software Foundation, the original co-creators of Kafka went on to form Confluent to offer commercial services and features on top of the Kafka platform. In this episode of Cocktails, we talk to a senior developer advocate from Confluent about Apache Kafka: the advantages that Kafka's distributed pub-sub model offers, how an event processing model for integration can address the issues associated with traditional static data stores, and the future of the event streaming space.

Transcript

Aaren Quiambao

Welcome to Coding Over Cocktails, a podcast by Toro Cloud. Here we talk about digital transformation, application integration, low-code application development, data management, and business process automation. Catch some expert insights as we sit down with industry leaders who share tips on how enterprises can take on the challenge of digital transformation. Take a seat. Join us for a round. Here are your hosts, Kevin Montalbo and Toro Cloud CEO and founder David Brown.


Kevin Montalbo

Welcome to episode 40 of the Coding Over Cocktails podcast. My name is Kevin Montalbo. Joining us from Sydney, Australia is Toro Cloud CEO and founder David Brown. Hey, David.


David Brown

Good day, Kevin. 


Kevin Montalbo

All right. And our guest for today is a senior developer advocate at Confluent, the company founded by the original creators of Apache Kafka. He is a top-rated speaker who has spoken at several conferences since 2009, including QCon, Devoxx, Strata, Kafka Summit, and Øredev. Joining us for a round of cocktails is Robin Moffatt. Hey, Robin, glad to have you with us.


Robin Moffatt

Hey, thanks for having me. Great to be here. 


Kevin Montalbo

All right. So we want to start by asking you about your background at Confluent. So what does a senior developer advocate do? And how did that experience lead you to become part of the Kafka Summit program committee?


Robin Moffatt

So a developer advocate, it's a funny role because, you know that meme that you see on the internet, where there's the "what I think I do", "what my parents think I do", "what my coworkers think I do"? There's a different one for all different roles, but people's perception of developer advocates, if you follow them on Twitter or whatever, is that they get on airplanes, they go to all these fancy conferences, do all this stuff, and amongst the developer advocate community, everyone's complaining about jet lag and the airline lounge sucks and all this kind of stuff. But in reality, and particularly since COVID, what developer advocates do is all about advocating for and to developers. So it's about working with developers to help them get on their journey with whichever software or technology or concept is going to benefit them, and usually for the company or the foundation or whatever it is that's employing the advocate.

So I work for Confluent, like you say. So I've got a big interest in and a big passion for Apache Kafka, which is how I got into this role. So it's about helping developers and architects understand what Apache Kafka and Confluent can do for them and how to get the most out of it, and also where it doesn't work for them. It's one of these funny things. It's not marketing, it's not sales, it's not engineering, it's not product; it's kind of all of them and none of them, and then other stuff as well. So it's helping them understand what it is. There's a sense of kind of like you're just doing a talk to promote it, but it's also helping them understand what it isn't, what they can use it for and what they shouldn't use it for, and just helping them have a happy time with it, if it is going to be the right thing for them. So that's what an advocate does, and then s-


David Brown

Sorry to interrupt you there, Robin. Did I hear you say you also represent the developers? Like, almost on behalf of the developers to Confluent? Like, presumably, "this is what the community is asking for, this is the feedback they're giving me." Do you take on that sort of role as well?


Robin Moffatt

Exactly that. Yeah. So that's, in my opinion, a really deliberate part of being a developer advocate. And I think any good one will have that ability to actually go back to the company or whoever is employing them to be able to say, look, this thing here doesn't work, or, you maybe think this thing is great, but no one wants it; what everyone wants is this. And obviously that's kind of like, well, everyone's got their opinion, haven't they? And so product and engineering, they do a great job of working out what's appropriate for us. But for a developer advocate, it's not about saying, here's our latest feature and this is why you must use it. It's about saying, here's this cool thing that we've just developed, we think it'll help you like this, and developers might say, no, it won't, because XYZ, and you take that back to the product managers and say, the feedback we're getting looks like this. And that's really, really valuable feedback, cos that's not just from internally, that's actually from people using the software day to day saying, these are our pain points. So a good developer advocate is always listening, always feeding back. It's not just telling people stuff outbound; it's also gathering that feedback and taking it back. So that's a key part of the role.


David Brown

And like you said, in a pre-COVID world, you would have been doing that presumably at conferences and the like; you would have been doing a lot of traveling. So how does that feedback loop occur in a post-COVID world? Is it a community forum?


Robin Moffatt

Yeah. So there's lots of conferences and, yeah, definitely. So I suppose pre-COVID I was traveling maybe about a third of the time, and there's conferences, there's meetup talks, but I've always been a big fan of the online world. Way back in the day, and I'm showing my age, I was into IRC and stuff like that, and I always found it fascinating the way people can connect online like that. So I guess nowadays there's lots of Slack workspaces, there's Discourse forums; we set up one of those recently at Confluent, we launched that late last year. And that's again been really useful for bringing developers together. And just going back to the advocate thing as well, it's also about meeting developers where they're at. So at Confluent, we've got a Confluent Slack workspace, we've got a forum, but there's also things like Twitter, there's Reddit, which is just where developers are. So advocates will kind of go and meet them there.

So you get a lot of feedback just chatting to people online. Sometimes it's public conversations; a lot of the time you'll have very fruitful conversations from someone complaining about something publicly. And we all like to take to Twitter and rant about stuff, but actually following up with people on that privately, and not with those anodyne company responses like, "hey, sorry, we heard you had a problem, can you give me your account number," blah, blah, blah. That's a corporate, boring, awful thing. But actually saying, that sounds like that sucked, what was the problem there? And sometimes they'll ignore you, sometimes they'll just be cross, but a lot of the time it's, well, actually, here's the issue, and you can actually help them through it, even if it's just working out, well, here's the docs page maybe you missed. And then they have a happy time, and if there isn't a docs page, then that's good feedback to take back to the team and say, this thing is unclear and perhaps we should document it better.


David Brown

That's hard. Cos you're almost getting dragged into becoming technical support.


Robin Moffatt

Yeah. And it's a funny fine line between how much of advocacy is actually just supporting people on the forums. Cos in a sense, if all you're doing is answering questions on a forum, that is just support, but that's also building up your knowledge of the area. And one of the things that I've found quite useful is that answering a bunch of stuff gives you an understanding of the areas that people have pain in, which then actually gives rise to really useful conference talks. Cos if people are always having trouble with this thing, always asking about it, then you can write a conference talk which talks about that thing, and you can pretty much guarantee an audience, because everyone always has problems with this stuff. So your conference talk almost writes itself, cos it defines the area that you want to talk about.


David Brown

And who would you report to within the organization? Would that be, like, a product owner or the developer team? Would you get involved in the sprint planning itself? How do you bring that feedback back?


Robin Moffatt

So, I mean, developer relations as a whole discipline, it tends to vary: it reports into marketing, or it could report into product. At Confluent it kind of moves around over time; it's reported to both over time. Usually it's one of those; sometimes it sits under engineering. But a lot of the time that feedback loop comes from building up relationships with the product managers and the engineering teams and just going back to them directly and saying, look, there's this thing. And I guess as companies get bigger, maybe that gets more formalized, but certainly in smaller companies, it's just reaching out directly and building that personal relationship.


David Brown

Well, I guess we should start talking about Kafka. So look, to begin with, what advantages does Kafka offer as a distributed publish-subscribe system versus a traditional request-response model?


Robin Moffatt

So Kafka, the way I like to put it is, events let us model what's happening in the world around us. Events describe things: an event is "something happened", and what happened, and at what time did it happen? And that's how a lot of the data we work with originates, and by modeling and working with data in that kind of way, you're capturing it with very low friction in terms of the conceptual way of dealing with it. As soon as you start to bucket it into other ways of doing it, it's fine, but there's tradeoffs to be made. So that's why capturing events as they happen is a great way to go about doing it. In terms of request response, request response is great in some cases, but a lot of the time you want to actually have an asynchronous approach to things. With request response, you start to block things up waiting for a response, and something else fails and you get that knock-on effect, the dominoes: all the things are blocking, waiting for that one endpoint to respond.

So by using Kafka as your broker, you can actually put messages onto a topic and then other services deal with those asynchronously. So you have looser coupling between your systems; they're still coupled in a sense, because they're still working with each other, but you're doing it asynchronously, which in a lot of cases is the right way to do it, but not always. And that's the thing: it always depends, it's working out which way is what you actually need in your system. Do you need that direct coupling, that deliberate "don't do anything until I've had a response"? Or is it more a case of saying, this customer has just placed an order, so I'm gonna say they've placed an order, I put that onto a message queue, onto a topic, and then anyone else who needs to know that they've placed an order, whether it's the inventory service or the shipping service or the fraud detection service, they can subscribe to that topic and they can find out about it. But then placing an order isn't dependent on each one of those responding and saying, yeah, I've heard about it. You can actually just put it onto that topic, and those services receive that message when they're able to.
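A minimal sketch of that decoupled pattern, using the confluent-kafka Python client. The broker address, topic name, payload fields, and consumer group are illustrative assumptions, not details from the conversation:

```python
import json
from confluent_kafka import Producer, Consumer

# The orders service publishes the fact that an order was placed and moves
# on; it does not wait for any downstream service to acknowledge it.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce(
    "orders",
    key="order-1001",
    value=json.dumps({"order_id": 1001, "customer_id": 42, "total": 99.95}),
)
producer.flush()  # ensure delivery to the broker before exiting

# A downstream service (inventory, shipping, fraud detection...) subscribes
# independently; each consumer group gets its own cursor over the stream.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "inventory-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    order = json.loads(msg.value())
    print(f"inventory-service saw order {order['order_id']}")
consumer.close()
```

Running a second copy of the consumer with a different `group.id` (say, `shipping-service`) would receive the same order event independently, which is the loose coupling being described.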


David Brown

Yeah, I'm interested in the use cases and, architecturally, the deployment models for Kafka, including the pub-sub model you're talking about there. But first, I'd like to talk about data integration, cos in previous writings on your blog, you've talked about how data integration built on a traditional static data store will inevitably end up with a high degree of coupling and poor scalability. How can switching to an event processing model for integration overcome that issue?


Robin Moffatt

So in terms of scalability, I suppose in terms of integration, the way that we've historically built systems is: I've got data in this one place and I want it in this other place. And going back many, many years, it's like, well, that's fine. We had one great big mainframe and we'd maybe copy it from one subsystem to another subsystem. Then move on a few years, and it's like, well, we've got this one great big central transactional server and another great big central data warehouse, and we'll just copy the things between them, and that's kind of point to point, and that's fine. And then fast forward a few more years, and I guess we're talking about 10 or 15 years ago, suddenly the whole thing exploded: suddenly there were numerous different databases to choose from, numerous different cloud services to choose from, and people were running software under their desks. It was no longer the purview of just this elite data team; anyone who could spin up a server or had a credit card could now start storing data and producing data and wanting to extract data or send data. And so you ended up with this huge spaghetti ball of tightly coupled mess. Like you say, I want to get data from this place to this place. And someone else would say, well, I also want data from this place, so I'm gonna copy it to here, but I can't copy it to here until this feed has run. And then that feed breaks and 10 people start screaming, and we only knew about one of them, and nine other people had piggybacked onto the back of that.

So the point around using something like Kafka for integration is that when an event happens, it gets published onto a topic, and it doesn't get deleted from that topic until whenever the person who created that topic has defined. They've defined how long to keep that data, which could be based on time: let's keep this data here for 10 years, or 10 days, or whatever's appropriate to that business case. Or based on size: let's keep the last 10 terabytes' worth of that particular topic. Or indeed, let's keep it forever. It depends entirely on the particular piece of data or the entity that you're working with. Anyone else who wants that data can subscribe to that topic and independently read from it. So you can have very near real-time exchange of data: data gets produced, like an order gets written, and these other services can read from that and know about it almost instantaneously. You can add other systems, maybe an audit system, or a machine learning model that wants to get some training data; they can read from that, they can hook up to it once a day and say, give me all of the new data. But the point is, the data's there on that topic in Kafka for anyone to read who's got permission to access it. So it's a much more loosely coupled way of saying, here's some data, it got created, and now anyone who needs that data can access it, but without building these tight couplings together. So it makes it more loosely coupled, and it also makes it more scalable, because Kafka is a distributed system. As you have more data in it, more throughput, you add more and more Kafka brokers and you get more scalability from it, and your consuming systems can consume in parallel. So it's much better that way also.
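Retention is a per-topic setting, so here is a hedged sketch of creating a topic with the time- and size-based policies described above, using the confluent-kafka admin client; the topic name, partition counts, and values are illustrative:

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Keep events for 10 days OR until the topic holds roughly 10 TB per
# partition, whichever limit is hit first. Setting retention.ms to -1
# would instead keep the data forever.
topic = NewTopic(
    "orders",
    num_partitions=6,
    replication_factor=3,
    config={
        "retention.ms": str(10 * 24 * 60 * 60 * 1000),  # 10 days
        "retention.bytes": str(10 * 1024**4),           # ~10 TB
    },
)

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises if creation failed
    print(f"created topic {name}")
```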


David Brown

Yeah, I mean, when I think about event processing engines like Kafka, there are obviously models like Internet of Things devices which are generating lots of events, or LinkedIn or whatever it may be, where there's some event occurring on the website and they just need to track vast amounts of data as people are doing stuff. But of course, those transactional databases that you're talking about are still incredibly important within an enterprise. So how do you stream events from those large SQL databases into Kafka?


Robin Moffatt

So there's two different approaches. One is that the application that wrote the data to that database writes it to Kafka. So it depends: why are we writing it to a database? Are we writing it to a database just because that's what we've always done? We've always written data to a database, so we'll keep on writing it to a database. Or do we say, well, we didn't actually need it in a database in the first place; we only put it into a database as a way of exchanging it with other systems. In which case you say, if it's appropriate for the project, let's just change it and write it to Kafka instead. A lot of the time that's not an option, at least initially; everyone's like, no, we're not changing anything, or the scope of this project isn't to actually do that. So you can use something called change data capture, which lets you take data from the database and stream it into anywhere else, including Kafka. Many, many different databases support this way of doing it. The details differ, but Oracle's got its redo log that you can get the data out of, the transaction log; there's the binlog in MySQL, the write-ahead log in Postgres. All the relational databases have got the concept of a transaction log, and you can capture the events, such as an insert, an update, even deletes, out of the database, and you can stream that data into other places, including Kafka.
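Change data capture into Kafka is commonly wired up through Kafka Connect; one widely used open source option is Debezium. Below is a hedged sketch of registering a Debezium MySQL source connector over Connect's REST API. The hostnames, credentials, and table list are placeholders, and the exact property names vary between Debezium versions (this follows the 2.x naming):

```python
import requests

# Debezium tails MySQL's binlog and turns inserts, updates, and deletes
# into Kafka events, one topic per captured table.
connector = {
    "name": "shop-db-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.internal",   # placeholder host
        "database.port": "3306",
        "database.user": "cdc_user",             # placeholder credentials
        "database.password": "cdc_password",
        "database.server.id": "184054",
        "topic.prefix": "shop",                  # topics become shop.<db>.<table>
        "table.include.list": "shop.orders,shop.customers",
        # Debezium keeps a history of schema changes in its own topic:
        "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
        "schema.history.internal.kafka.topic": "schema-changes.shop",
    },
}

# Kafka Connect's REST API conventionally listens on port 8083.
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```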


David Brown

Yeah, great. And a lot of people would be familiar with Kafka, but I think fewer perhaps would be familiar with the fact that there is ksqlDB, which is also produced by Confluent. Now, it's described as a database for building stream processing applications on top of Kafka. Tell us, what is the purpose of ksqlDB? And how does it differ from a traditional SQL database?


Robin Moffatt

Yeah. So ksqlDB is really cool. It's one of those things that, when I joined Confluent, was just in its infancy, and it's fantastic to see how it's grown. So it's a database for building event-driven applications, but ksqlDB is also a way of doing stream processing, declaring stream processing using SQL. When I came into, I suppose, the big data space, my background was in analytics, and I started reading about Hadoop and things like that, and there was this thing called Spark, all this stuff, and I felt massively left out because I couldn't write Java, I couldn't write Scala or any of that kind of stuff that people use for doing stream processing. And then I started reading about KSQL and got my hands on it, and it was like, this is really cool. I can take a stream of events and I can say, I'd like to transform that stream of events into another one. I can filter it and I can aggregate it, I can even join it to another stream, and I can express that using SQL. And SQL's like my bread and butter, cos that's what my background is.

So that's one of the things that ksqlDB lets you do: you create these queries in SQL, and they're continuous queries. When you create this query, it actually continues running on the server; if the server stops, when it restarts, the query keeps on running. So you're continually processing these streams of data. That's one of the key purposes of ksqlDB: you can build these stream processing applications that you're expressing using SQL. So if you start thinking about things like ETL: in the old days, you would pull some data out, then transform it, then put it somewhere else, or you would pull it out, put it somewhere else and then transform it, depending on whether you're doing ELT or ETL. You can actually do this concept of streaming ETL by taking data out of the system, like from a transactional database, like you asked about a minute ago. As that data is coming out of the database, you can be transforming it and enriching it and doing the stuff you want to do to it, and then you can store that into a Kafka topic for other systems or services to use. You can also then stream it on downstream using something like Kafka Connect and push it out to another system. One of the really cool things that ksqlDB also does, and this is where the DB bit of the name comes from, cos it used to be called KSQL and then it got renamed ksqlDB, is that it actually stores data. It builds a state store internally. So ksqlDB itself is built on Apache Kafka: it reads data from Kafka topics, it writes data to Kafka topics, Apache Kafka is its persistence layer. But within it, it has this state store, which I think uses RocksDB in the background, and this state store actually builds up the state. So if you're building an aggregation, like, I want to know how many orders we've processed in the last 10 minutes...
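A hedged sketch of submitting one of those continuous queries to ksqlDB over its REST API from Python; the server address, stream, and column names are invented for illustration, and the `orders` stream is assumed to already exist:

```python
import json
import requests

KSQLDB = "http://localhost:8088"  # assumed ksqlDB server address

# A persistent (continuous) query: it keeps running on the server,
# turning the raw orders stream into a filtered derived stream.
statement = """
    CREATE STREAM big_orders AS
      SELECT order_id, customer_id, total
      FROM orders
      WHERE total > 100
      EMIT CHANGES;
"""

resp = requests.post(
    f"{KSQLDB}/ksql",
    headers={"Content-Type": "application/vnd.ksql.v1+json"},
    data=json.dumps({"ksql": statement, "streamsProperties": {}}),
)
resp.raise_for_status()
print(resp.json())
```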

So you can build up this kind of cumulative aggregate, and you actually hold that internally, which is pretty useful cos you have that state for the aggregation. It can be scaled out across ksqlDB workers, and it handles that automatically, but you can actually query that aggregate directly. So you can do what's called a pull query against ksqlDB, either directly or through the REST API or the Java clients. So if you take a step back from it, it means that you can say, I've got this data coming in from anywhere: I produce it to Kafka directly, or I pull it from a database stream, or I pull it from anywhere else. I can run a query which is gonna build up an aggregation saying, what was the maximum order value in this time period, or how many did we process, whatever. It holds that state, continually updating, within ksqlDB. And then from an external application you can query ksqlDB and say, how many orders have we currently processed in this 10-minute window? So you don't need an additional cache or store elsewhere. You've just got your data being created, it's going into a Kafka topic, and then ksqlDB is maintaining that state store on top of it.
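And a sketch of the pull query mentioned there, asking the materialized aggregate for its current value. It assumes a windowed table like the hypothetical `orders_per_window` (sketched after the next exchange) already exists and is keyed by `customer_id`:

```python
import json
import requests

# A pull query returns the current state of a materialized aggregate and
# then terminates, like a key lookup against a regular database.
query = "SELECT * FROM orders_per_window WHERE customer_id = 42;"

resp = requests.post(
    "http://localhost:8088/query",
    headers={"Content-Type": "application/vnd.ksql.v1+json"},
    data=json.dumps({"ksql": query}),
)
resp.raise_for_status()
for row in resp.json():  # first element is the header, then one row per window
    print(row)
```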


David Brown

Yeah, it's interesting. So as I understand it, ksqlDB is basically an interface to Kafka Streams, to be able to query or set up Kafka streams using a SQL syntax. The DB element, like you're saying, is this persistent data store related to that, where you can presumably set up the stream to point to another stream, to output it to a persistent data store in a Kafka stream. Are there limits in terms of that time window where you can query that data?


Robin Moffatt

So yeah, ksqlDB runs as a Kafka Streams application, and as a user, you don't need to know any Java, which is great because I don't know any. So you just write SQL. And then in terms of the retention and stuff like that, you can define that when you're creating what's called a table. So within ksqlDB, you have the concept of streams, and you can write a SELECT against a stream. You can also build a table: you say CREATE TABLE AS SELECT and you define your aggregate, and that table is backed by a Kafka topic. And you can say, within that table, what retention period do I want on that data? And that's gonna come down to the business use case that you're writing the application for.
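A hedged sketch of that kind of windowed table, where the window clause also carries a retention period; all names are illustrative, and the statement would be submitted the same way as the earlier `/ksql` call:

```python
# The WINDOW clause sizes the aggregate buckets; RETENTION bounds how long
# old windows stay materialized and queryable before being dropped.
ORDERS_PER_WINDOW = """
    CREATE TABLE orders_per_window AS
      SELECT customer_id,
             COUNT(*) AS order_count,
             MAX(total) AS max_total
      FROM orders
      WINDOW TUMBLING (SIZE 10 MINUTES, RETENTION 7 DAYS)
      GROUP BY customer_id
      EMIT CHANGES;
"""
```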


David Brown

How about relationships? 


Robin Moffatt

Yep. So you can do joins within that. The latest version of ksqlDB is 0.19, which dropped, I think, yesterday, and now supports foreign-key joins also. So yeah, you can do joins between streams and tables, and tables and tables, and streams and streams. So you can do that as well.
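For instance, a hedged sketch of a stream-stream join; the `shipments` stream and its columns are invented for illustration. Stream-stream joins in ksqlDB are windowed, hence the WITHIN clause:

```python
# Correlate two event streams that arrive within 24 hours of each other;
# stream-table joins (and, as of 0.19, foreign-key table-table joins)
# follow a similar SELECT ... JOIN shape.
SHIPPED_ORDERS = """
    CREATE STREAM shipped_orders AS
      SELECT o.order_id, o.customer_id, s.shipment_id
      FROM orders o
      INNER JOIN shipments s
        WITHIN 24 HOURS
        ON o.order_id = s.order_id
      EMIT CHANGES;
"""
```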


David Brown

Can you also join out to an external SQL database, like an Oracle or Postgres database?


Robin Moffatt

In a sense, because you can integrate those into Kafka. So the pattern to follow would be to pull that data into a Kafka topic. A kind of canonical example would be: I've got order data coming in from my platform, my website platform. It's writing order data into a Kafka topic, and it's got a foreign key reference out to the customer, so it doesn't have the full record; it's just nicely normalized. And I've got my customer data in a database. So you pull that data from your database into a Kafka topic, and when you take that, you model it as a ksqlDB table. There's this thing called the stream-table duality; we can get into it if you want to, but it's basically how you semantically deal with the data: is it an unbounded stream of events, or is it key-value information? So you can take that data from the database that you're integrating into Kafka, which is both a continuous stream of changes and a snapshot, and you can then join that to that event stream. So you can join to data in external sources; you just make sure you pull that data from the external source into Kafka first.
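A hedged sketch of that pattern: the CDC-fed customers topic modeled as a ksqlDB table, then joined to the orders stream. The topic name follows the earlier hypothetical Debezium prefix, and it assumes the CDC records have been unwrapped to flat JSON; everything here is illustrative:

```python
# Model the CDC topic as a TABLE: each customer_id maps to its latest value.
CUSTOMERS_TABLE = """
    CREATE TABLE customers (
      customer_id INT PRIMARY KEY,
      name VARCHAR,
      country VARCHAR
    ) WITH (KAFKA_TOPIC='shop.shop.customers', VALUE_FORMAT='JSON');
"""

# Joining the orders stream to the table denormalizes each order as it
# arrives, using the table's current snapshot for the lookup.
ENRICHED_ORDERS = """
    CREATE STREAM enriched_orders AS
      SELECT o.order_id, o.total, c.name, c.country
      FROM orders o
      LEFT JOIN customers c
        ON o.customer_id = c.customer_id
      EMIT CHANGES;
"""
```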


David Brown

Yeah, right. You previously alluded to microservices with your pub-sub example, with your order example: posting new orders to a message queue and having services subscribing to that queue and executing some logic based on that. So can you explain in a bit more depth how a Kafka streaming engine can facilitate a microservices architecture?


Robin Moffatt

Yeah. So it's this idea of being able to exchange the information between the services. You've got your bounded model for each microservice, and instead of building around this idea of request response, where your order service sends a request over to the fraud check service and then does nothing until it gets a response from it, it can actually put a message onto a Kafka topic. So you're doing these relationships between your different microservices asynchronously. Your fraud check service, in fact, maybe that should actually be request response, cos you maybe don't want the order to proceed until it's been fraud checked; it depends on the business process behind it. But something like an inventory update, perhaps you would simply say the orders service puts out a message saying, we've just sold this thing and we've allocated this stock. Your inventory service will be subscribing to that topic; it would find out about that, and it could update its own internal data store. So you start building out your services in that way, with Kafka as the broker between them. And because the data is retained on Kafka, and this is one of the key differences between Kafka and the message queue solutions that people may be familiar with, cos Kafka's got an element that behaves kind of like a message queue but it's not a drop-in replacement for other ones, it's a broader technology than that; because it retains the data, it's not only the systems that you initially designed that can build around it.

So you have your orders service put a message onto a topic, and your fraud check service and your inventory service read from that, and other services can also come along subsequently and build on that data. So in terms of microservices, one of the key things to consider is also around the schemas you set for your data, and how you're going to manage that and the compatibility between them. Because if you're doing request response, you've got the message formats that you're gonna exchange, and that's your API that you're gonna support. If you start working asynchronously with messages onto a topic, the schema of those messages becomes your API. So you need to make sure that when you place an order and you say, this order was placed at this timestamp and it goes to this address here, or something like that, those fields are understood by the other services to be in a particular format, and whether they exist or are optional and things like that. So that's where something like a schema registry becomes really important. Because on day one you say, well, I'm writing this proof of concept, I understand what the format of this data is, so it's kind of obvious and I'll not bother with any of that. And then on day 100 you get some new developers in, and maybe you go on holiday, and people start saying, well, I guess this is like orders, so we'll just put some data on here which looks like that. And then things start changing, and someone changes the timestamp, and instead of doing it as a varchar they maybe put it as a bigint, as an epoch since 1970, and things start to break. So you have to have those compatibility guarantees and think about how you're gonna work with things like schemas, which act as the API between your different microservices.
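A hedged sketch of treating the schema as the API, using the Schema Registry support in the confluent-kafka Python package: the producer serializes against a registered Avro schema, and under the registry's default backward-compatibility checking, an incompatible change (say, switching the timestamp field from string to long) would be rejected at registration time. The schema, field names, and URLs are illustrative assumptions:

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# This Avro schema is the contract that downstream services rely on.
ORDER_SCHEMA = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "long"},
    {"name": "placed_at", "type": "string"},
    {"name": "address", "type": ["null", "string"], "default": null}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(registry, ORDER_SCHEMA)

# Serializing registers the schema (if new) and enforces compatibility.
payload = serializer(
    {"order_id": 1001, "placed_at": "2021-07-01T12:00:00Z", "address": None},
    SerializationContext("orders", MessageField.VALUE),
)

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("orders", value=payload)
producer.flush()
```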


David Brown

It's funny, because with a lot of this stuff, you sort of think about it as not being important because the system is so flexible. We had the advocate for MongoDB on the podcast some weeks ago, and he said exactly the same thing: look, guys, you really do need to put some thought into your schema development here. And it's interesting that you're saying the same thing. It's also interesting, kind of... sorry. Go on.


Robin Moffatt

I was just gonna completely agree with that point, really. It's the flexibility and the freedom and the ease of it which is so attractive to developers, because it's like, I can just pick this thing up and do things. But it's kind of like juggling with knives, in a sense. It's like, go for it, be my guest, but if you're gonna do it, make sure you're aware of the pain that can come along down the line. And sometimes people just have to learn by doing it the less-than-brilliant way and being burnt by it, and then they're like, OK, next time round we're gonna do this. And it's always that interesting trade-off in how you build systems. Do you go with something with tons and tons of guard rails that's super restrictive, where you can't make any mistakes, which is usually really tedious and boring? Or do you say, here's this completely green field, knock yourselves out, but you're on your own? It's about getting the right balance between what's gonna be productive but supportable in the long run, versus what's just not going to be such a great idea.


David Brown

Look, to finish off, I'd like to get some thoughts, if you have any, on what the future holds for event streaming. We've had some guests on the program before who have had some big, grand visions for event streaming. But I guess you're more on the ground with developers and what they're doing and what they're looking for. Do you have any thoughts as to where this industry is headed?


Robin Moffatt

I think it's quite easy to get caught up in things like, on Twitter, you've got these echo chambers where, because you follow people who are interested in the stuff that you are, it's quite easy to sometimes feel that everyone is at the same place. It's like, of course everyone's building streaming systems, of course everyone's doing streaming ETL. And actually, in practice, I think people are just starting to catch up with, I guess, "cloud makes sense to run our workloads in, because why run it yourself if you can get someone else to do it for you?" And so I think what the near-term future and the mid-term future holds is people realizing that, because events model the world around us, starting with an event streaming platform like Apache Kafka, like Confluent, for working with your data, whether you're building integration pipelines or whether you're building applications, is a really, really powerful thing to do, because you're not losing any fidelity in your data.

You're actually working with events, which is how data originates a lot of the time. And from there you can go and build state and stick it in a relational database or a NoSQL store, or do whatever you already do. But I think that shift in mindset, away from "this is how we've always done things for the last 50 years", or "let's chuck away schemas and call it NoSQL" or whatever, that's all part of the same way of doing things. Working with events as that fundamental piece on which you then build, I think, is a mind shift which is starting to happen, but it's still got quite a long way to go. I truly believe that it's a very, very powerful foundation on which to actually build systems.


David Brown

Yeah. And we hear the same thing from our other guests: we're just at the starting point of this. Whether it be event streaming or microservices or digital transformation or any buzzword that you may like to decide on, a lot of these concepts and technologies and architectures are starting to mature now. And yes, we're starting to understand how they're used and how they're deployed. And for the next 10 years, most enterprises are just going to be busy doing it.


Robin Moffatt

Which is good, because that's the whole point of all this technology. It's fun, but it's also got to deliver the value. It's not just about shiny boxes.


David Brown

Yeah, Robin, it's been a pleasure having you on the program. How can our audience follow you and keep in touch with what you're writing about?


Robin Moffatt

So, I'm always on Twitter. I'm @rmoff on Twitter. I've got a blog rmoff.net. And you can also check out Confluent Cloud and the Confluent Blog where I write a lot of stuff. 


David Brown

Brilliant. Thanks for joining us on the program today.


Robin Moffatt

Thanks so much for having me.


Kevin Montalbo

All right. That's a wrap for this episode of Coding Over Cocktails. To our listeners: what did you think of this episode? Let us know in the comments section on the podcast platform you're listening to. Also, please visit our website at www.torocloud.com for a transcript of this episode as well as our blogs and our products. We're also on social media: Facebook, LinkedIn, YouTube, Twitter, and Instagram. Talk to us there, because we listen; just look for Toro Cloud. On behalf of the team here at Toro Cloud, thank you very much for listening to us today. This has been Kevin Montalbo for Coding Over Cocktails. Cheers.



