PODCAST

Designing Data-Intensive Applications with Martin Kleppmann

Transcript

Kevin Montalbo


How do we design data-intensive applications? In this round of Cocktails, we talk to the co-founder of Rapportive and author of the critically acclaimed book Designing Data-Intensive Applications. We delve into the benefits of local-first software, a project which aims to enable both software collaboration and data ownership, with the ability for users to work offline, while also improving the security, privacy, long-term preservation, and user control of data.


Kevin Montalbo


Welcome to episode 43 of the Coding Over Cocktails podcast. My name is Kevin Montalbo. Joining me from Sydney, Australia is Toro Cloud CEO and founder David Brown. Hi, David.


David Brown


Hi, Kevin.


Kevin Montalbo


All right. And our guest for today is a researcher in distributed systems at the University of Cambridge. Previously, he was the co-founder of Rapportive, which was acquired by LinkedIn in 2012. He's also the author of Designing Data-Intensive Applications, described by the Chief Technology Officer of Microsoft as required reading for software engineers. He's a regular conference speaker, blogger, and open source contributor. He believes that profound technical ideas should be accessible to everyone, and that deeper understanding will help us develop better software. Sharing that deep understanding of software development with us in this episode is Martin Kleppmann. Hi, Martin. We're glad to have you on the show.


Martin Kleppmann


Hello! Thank you very much for having me, and thank you for that very kind introduction.


Kevin Montalbo


All right. So before we delve into the more technical stuff, can you please tell us about your role as Senior Research Associate and Affiliated Lecturer at the University of Cambridge?


Martin Kleppmann


Yeah, it's a role I sort of fell into by accident a little bit. I do a mixture of research and teaching, with research as the primary focus. That means I'm spending a lot of time thinking through algorithms, trying to write some code, trying to make it better, and then writing those things up in the form of research papers and trying to get those published, and I work with various collaborators. I can tell you a little bit more shortly about the sort of topics that we work on. But before that, I spent a little while writing my book full time, the book that you mentioned. And before that, I was a software engineer in industry. So I did the whole Silicon Valley internet companies thing: we started a startup, we moved to San Francisco, and we were part of that exciting ecosystem there for a while.


David Brown


How did you make that transition from those successful startups to a life in academia? Why would you, and what made you make that choice?


Martin Kleppmann


Yeah, it's a slightly unusual thing to do. For me, I think it was the right thing. I enjoyed the startup time in terms of getting practical, hands-on experience of building systems. But after a while, I also got a bit frustrated that it was all very short-term. You're always just thinking one week ahead: OK, what is the next thing we need to ship? What's our next sprint? You're always fighting fires, always very close to the next thing you're building. Whereas really what I was hoping for was something where I could think a bit longer-term, and have the luxury to actually try and attack problems that are hard and will take some time to solve properly, but which will be valuable if we can solve them. And so, yes, I left LinkedIn in 2014 or so, and first took a year out as a sabbatical to work full time on my book. During that time, I spent a lot of time reading, mostly background research for the book. And that drew me into research a bit, because I was doing at least one aspect of research, which is understanding the literature, what has already been said before. And I had these ideas for technologies that I thought would help users gain better ownership over the data that they create. The idea wasn't quite well formulated at the time, but I had this feeling that cloud software was a bit of a dead end. In a way, cloud software has enabled so many wonderful things: with Google Docs, we can collaborate on a document in real time. We don't have to send it back and forth as a Word document email attachment anymore. You can just have everyone log in and edit at the same time, and it makes things so much more convenient. But at the same time, there's this risk that if Google decides to lock your account, then you're locked out of Google Docs, and you're locked out of every document that you've ever created on Google Docs. All of your data is essentially held hostage by Google in this case. And there's a huge risk that one day some automated system simply decides that you violated the terms of service. This happens all the time: apparently, millions of Google accounts get closed every year just on the basis of some automated system deciding that you violated the terms of service. And then that's it. You have no more access to anything you ever created in your Google account, with no warning and no recourse. And I thought that's really terrible, and wanting to solve that problem is part of what got me into research. So then I started looking at algorithms and techniques that would allow us to build collaboration software which behaves the same as Google Docs and has the same kind of convenience, where you can have several people editing in real time and so on, but which also makes sure that every user has a copy of the data on their own computer, where nobody can take it away from them. If it's a file on your own computer, that's something much more concrete, much more tangible, much safer.


David Brown


As the name implies, the file first and foremost exists on your local drive, and then it's being backed up to cloud storage. I guess we still have that sort of concept in a lot of applications today. So how does your concept differ from just a cloud backup?


Martin Kleppmann


Yes. Well, nowadays you can have a file in Dropbox, say, or in Google Drive. Dropbox or Google Drive doesn't really look inside that file; it just treats it as a sequence of bytes, and you can have a Word document or a Markdown document or any other file format you want in there. As long as you are the only person who's editing it, everything is fine. Life is simple. The problem arises when you've got several people contributing to the file. What happens, for example, if I modify the file and, independently, you also modify the file, and now we both save the file? In the case of Dropbox, what you get is a file conflict. Dropbox will detect that the file was modified by two different users at the same time, so it will give you two copies of the file, one containing your change and one containing your colleague's changes. And now it's up to you to manually merge those two things back together again. Good luck. Maybe, if you're lucky, your software provides some kind of diffing view which allows you to compare the two files. Otherwise it's going to be an extremely manual, very labor-intensive process. Now, that problem of having to do merges manually, we don't have in Google Docs, because Google Docs is constantly merging all of the users' changes automatically. And we want to take that same concept of automatically merging file versions. A similar kind of thing happens in Git. In Git, each user can work off on their own branch. You can make a commit even if you're not connected to the internet right now; you can just do that offline on your computer, and you can make as many commits as you like. Then at some point you decide: OK, I'm ready to share my work, I'm going to push it to GitHub now, for example, and make a pull request, and then the other people can decide to merge that in. Again, we've got this kind of branching and merging behavior here, and again with Git, the merging can happen automatically. If you're editing two different files in two different pull requests, Git will very happily merge those. If you're editing different parts of the same file, say I edit the top of the file and you edit the bottom of the file, then it'll probably still be able to merge those automatically. If you edit the same lines, or very nearby lines in the same file, then Git will give you a merge conflict and leave it to you to resolve. But all of this merging and merge conflict detection only works if Git is working with plain text files, like source code files. If you put any other file format into Git, anything that Git would call a binary file, it does no automatic merging, because it doesn't understand the file format. So again, you're back to this situation of having to merge files manually. And doing things as plain text is fine for software engineers, but people in real life work on spreadsheets, say, and spreadsheets are not plain text. Or they work on a CAD drawing, or architectural building plans for a building, or the score for a movie, those types of things. You can't really represent those things well as plain text formats; generally, they are going to be some sort of binary format produced by some higher-level software.

And so where we're trying to get to with local-first software is that all these different types of software that produce all these different file types can continue storing their data in files on the local disk. But when several people independently modify their copies of those files, we can merge those together. Moreover, if several people want to work together in real time, we can also enable that sort of real-time collaboration, which is something you don't really get otherwise: the really character-by-character, see-what-somebody-else-is-typing real-time collaboration. And the cool thing is that we can actually do all of those things using just one programming model. We have one technique, called CRDTs, which I can explain a bit more about if you're interested. That allows us to do all these nice things like real-time collaboration, but while storing the file on your local disk. It allows us to do asynchronous collaboration, the Git-style pull request type of workflow, but with automatic merging. It can allow users to work offline on their document and then merge with other users when they come back online again sometime later. You can even allow things like having several people in a remote location collaborating with each other over a local network while they're disconnected from the wider internet; just device-to-device communication like Bluetooth is sufficient, so there don't necessarily have to be any servers involved in this type of software at all. And I find it really cool, because it just enables so many new types of workflows and new types of applications and models for collaboration that current cloud software does not have. And at the same time, it's also better for users, because it reduces the risk of, say, a cloud vendor going out of business and taking all of the data away with them.


David Brown


So how mature is the system, the protocol that facilitates this? Is this something that's readily deployable today, or is it still in a research phase? Where are you at?


Martin Kleppmann


It's both, I would say. There are some people putting it into production right now. I should also say that there are a number of implementations of these ideas; there are a few CRDT libraries. CRDTs are really the foundational technology to enable this kind of automatic merging of different document versions. The library that I work on is called Automerge. It's an open source library which you can find on GitHub. It currently has a JavaScript implementation and also a Rust implementation, and we're currently moving to using the Rust implementation as the primary one. You can then compile the Rust to WebAssembly and still use it from JavaScript. You can also compile it to native code and use it in mobile apps, for example, with a wrapper in Swift or a wrapper in Kotlin or a wrapper in Python, or whatever languages people are using.
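
To make the merging behaviour concrete, here is a minimal sketch of the workflow Kleppmann describes, written in TypeScript against Automerge's published 1.x JavaScript API. Treat it as an illustrative sketch rather than a definitive usage guide; the document shape and item names are made up:

```typescript
import * as Automerge from 'automerge'

// Each user keeps a full copy of the document on their own device.
type TodoDoc = { items: string[] }

let alice = Automerge.change(Automerge.init<TodoDoc>(), doc => {
  doc.items = ['buy milk']
})

// Bob gets a copy of Alice's document (synced over any transport).
let bob = Automerge.merge(Automerge.init<TodoDoc>(), alice)

// Both edit independently, possibly offline.
alice = Automerge.change(alice, doc => { doc.items.push('walk dog') })
bob = Automerge.change(bob, doc => { doc.items.push('write report') })

// When they reconnect, the CRDT merges both sets of changes
// automatically, with no manual conflict resolution step.
const merged = Automerge.merge(alice, bob)
console.log(merged.items) // contains all three items
```

The key property is that `merge` is deterministic and commutative: both users end up with the same document regardless of the order in which they exchange changes, which is what makes serverless, offline-first collaboration possible.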


David Brown


You mentioned it's an open source project. Are there commercial applications in this for yourself or for the university?


Martin Kleppmann


At the moment, there's no commercial interest behind it. We as researchers are maintaining it as part of our research activities; we're not trying to make any commercial products out of it. Maybe one day the time will be right to try and commercialize it, but I don't think we're quite there yet. I'll be honest with you, it's still a fairly early-stage technology. It works: we have a good test suite and it's pretty robust. But it's a bit slow at the moment and it uses quite a lot of memory, so one of my main focus areas right now is to improve the performance. We have a long way to go there, but we also have some very promising approaches that we're trying. So whether it's fast enough or not right now depends a bit on the application.


David Brown


Where do you see the best applications for it? Is it in that data privacy and security type of space, or in certain verticals, like the CAD example you mentioned? I can also imagine, when you're working on large Photoshop files, that working in the cloud is not necessarily ideal. Are there any markets you see as a natural fit for it?


Martin Kleppmann


It is quite broad, but of course we do need to start somewhere. One of the first production use cases that Automerge currently has, which I find quite interesting, is with the Washington Post, the newspaper. They put Automerge into production in their internal tooling for updating the website. Their main website at washingtonpost.com, if you look at it, has several columns; in each column there are articles; each article may or may not have an image; it'll have a headline with varying font size and varying text, and maybe extra text underneath the headline. They might move the layout around or rejig it from time to time based on what's happening in the news. And all of this layout is set up manually by editors. There's a team of editors working around the clock at the Washington Post, so that whenever some important news comes in, they will figure out where to slot it in on the home page, what old news to take out, and so on. For this, they have their own in-house piece of software that allows them to edit the layout. And since they have several editors working on the home page at the same time, they need a collaboration workflow. Moreover, they don't just want one editor to make a change with a click and have it immediately live on the website. Instead, they have a review workflow, where one editor can essentially accumulate some changes they want to make on what you would call, in Git terms, a private branch. They are operating on their own private copy of the home page, and they can drag things around and see what it would look like. Once they're happy with it, they'll click a button to request a review from a colleague. The colleague will then see what this person has done, and will also see what people have done to other sections of the website; some people might be working on the news section while other people are working on the sports section, and we want to merge those edits together automatically. At some point they decide: OK, we're happy with the layout now. They hit the publish button, and it goes out to the live website.


David Brown


That's really nice.


Martin Kleppmann


Yeah, it's really nice. What I find interesting about it is that it has quite a real-time collaboration element, but it also has this element of different users working in their own private copies for a while, until they're ready to share their work. At the point where they're ready to share, they hit the button and it becomes part of a shared document. Using Automerge allowed them to seamlessly combine those worlds, because Automerge is perfectly happy for you to have different branches and forks of a document, and for different people to have different views for a while, and then to reconcile those views when you're ready to reconcile them.
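
As a rough illustration of that branch-then-publish pattern, here is a hedged sketch using the same Automerge 1.x API as before. The page fields and headline text are invented for illustration; the Washington Post's actual data model is not public:

```typescript
import * as Automerge from 'automerge'

type Page = { headline: string; sports?: string }

// The shared, published state of the home page.
let published = Automerge.change(Automerge.init<Page>(), doc => {
  doc.headline = 'Morning edition'
})

// An editor takes a private working copy, like a private branch in Git.
// Automerge.clone gives the copy its own identity so it can diverge.
let draft = Automerge.clone(published)
draft = Automerge.change(draft, doc => {
  doc.headline = 'Breaking: markets rally'
})

// Meanwhile, another editor updates a different section of the live page.
published = Automerge.change(published, doc => {
  doc.sports = 'Cup final tonight'
})

// On "publish", the private draft merges back in; both edits survive.
published = Automerge.merge(published, draft)
console.log(published) // { headline: 'Breaking: markets rally', sports: 'Cup final tonight' }
```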


David Brown


Very cool. Let's talk about your book, Designing Data-Intensive Applications, published in 2017. Four years on, the book is still going strong and receiving positive reviews on Amazon; it's obviously maintained its relevance. What do you think it is about the book that has made it sustain its relevance today?


Martin Kleppmann


Well, I was very clear when I was writing it that I wanted to focus on the fundamentals rather than on the latest fads of technology. And although people say tech is so fast-changing, constantly changing from one day to the next, with a new JavaScript framework around the corner every six months, I found that actually the fundamentals change surprisingly slowly. A lot of the fundamentals of the databases we're using now are still anchored in the 1970s, and some things are really shockingly similar to what was done in 1975 or so, even though the underlying hardware has changed a lot. So what I tried to do in this book is give people a framework for figuring out which technologies they should be using for their particular project. Because there are so many different databases and data storage technologies and processing technologies and so on. There's a bunch of commercial products, there's a bunch of open source projects, and everyone claims they're the best at everything. Obviously, that can't be true, because nobody is the best at everything. Each product has its strengths and weaknesses, but a lot of products are not very good at articulating what their strengths and weaknesses are. So what I wanted to try with this book is to really figure out the fundamentals. Essentially: if you want to store data, there might be three primary ways you can do it; there's approach A, approach B, and approach C. Then we can categorize the products that exist: databases X, Y, and Z store data according to approach A; databases E, F, G, and H store data according to approach B; and so on. That kind of thing helps people build up a mental map of the landscape, and in that way helps them figure out roughly which set of products they should be looking at. Say you have a system that needs to store large batches of data quickly and then be able to query over them all; or a system where data comes in only slowly, but then gets queried many, many times; or a system where you've got new writes coming in at a fast rate, but they actually don't get queried that often; and so on. Depending on what your access patterns are for a system, and what your consistency requirements are and so on, there are ways of figuring out which tools are better for the job and which are less good at the job. And I think that's part of what has made this book useful to people. I don't try to teach people how to use a particular product, because there's plenty of documentation out there. If you want to learn all the features of Postgres, that's fine, just read the Postgres documentation; it's excellent. What I try to do is help you figure out in which circumstances you would use Postgres, versus which circumstances call for some totally different database system.


David Brown


Yeah. Where did your passion for data come from? Was it your time with your startups, with Rapportive, or when you went to LinkedIn with the massive data sets and streaming services and the like? Where did all that come from?


Martin Kleppmann


Yeah, I think certainly when we were at Rapportive, we were dealing with a moderately large data set at the time, and we did struggle with it a bit. We had essentially just one big database that we tried to put everything in, and trying to get the performance of that database to be what we wanted was always a bit of a challenge. So I started learning more about techniques for scalability that would allow us to grow further and still do the kinds of operations on that database that we needed to. And then when I got to LinkedIn, I started working on their stream processing efforts. This was just in the early days of Apache Kafka; Kafka had only just been made open source. And this was before Confluent spun out of LinkedIn and started to commercialize Kafka. We were in this exciting time of trying to figure out how to best use these tools: OK, we've got this streaming log abstraction provided by Kafka; what sort of processing primitives can we provide on top of it? How do we make them scalable and reliable? LinkedIn was operating at pretty large data volumes for these things, so we wanted to be efficient, and we wanted to make sure that we could just set up a job and have it run reliably without getting paged in the middle of the night, and so on. So a lot of the motivation comes from those sorts of personal experiences of trying to build systems, and then later trying to learn the lessons from building those systems.


David Brown


You've described data-intensive applications as needing to be reliable, scalable, and maintainable. What approaches can people take to achieve this?


Martin Kleppmann


Well, it's hard to give a very short answer, because essentially the book is a very long-winded, 700-page answer to that question. But...


David Brown


Are there some basic principles that they should be looking at?


Martin Kleppmann


As a basic principle, I would try to be very conscious of exactly the operations that are happening, how often they're happening, and how they can best be enabled. Take scalability: scalability is not a one-dimensional property. It doesn't make sense to say a system is scalable or not scalable without saying what it's scalable with respect to. Generally, scalability means you can increase something, and that something might be the amount of data the system stores, or the number of queries it handles per second, or the number of distinct customers who are using it, or the number of concurrent users at any one time, or any of these various metrics of how busy the system is. And as that metric grows, you want the system as a whole to still provide reasonable performance. Performance, again, is not a single property. You could be measuring the latency of a request, the time until a request gets a successful response; or the throughput, in terms of gigabytes per second. What is the metric of performance that you're trying to optimize? So the whole domain of scalability is essentially about saying: if I increase the load in a certain way, where load is defined in some way that makes sense for my application, then I want the performance to still remain good, where performance is also defined in some way that makes sense for my application. Once you've broken it down like that, you have a degree of clarity. Then you can say: OK, what we're trying to do is store the maximum amount of data possible, and we're not going to worry about how it gets queried. Or: we're going to make our queries really fast, so we need to make sure our scalability is in the query layer. And so on. The concrete steps you would take to make an application scalable depend massively on what the application is and what it needs. But the steps you take in order to figure that out are repeatable; they are the sorts of questions that the book tries to teach you to ask.
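
As a small illustration of the point that performance is not a single number, here is a hedged sketch in TypeScript, with made-up sample data, measuring both throughput and tail latency for the same workload. The two can tell very different stories about the same system:

```typescript
// Response times (in ms) collected for a batch of requests.
// The values are hypothetical sample data.
const latenciesMs = [12, 15, 11, 14, 250, 13, 16, 12, 900, 14]

// Throughput: requests completed per second of total work done.
const totalSeconds = latenciesMs.reduce((a, b) => a + b, 0) / 1000
console.log(`throughput: ${(latenciesMs.length / totalSeconds).toFixed(1)} req/s`)

// Median vs. 95th-percentile latency: the median looks healthy,
// but the tail reveals the slow outliers that users actually notice.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b)
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)
  return sorted[idx]
}
console.log(`p50: ${percentile(latenciesMs, 50)} ms`) // 14 ms
console.log(`p95: ${percentile(latenciesMs, 95)} ms`) // 900 ms
```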


David Brown


And of course, you don't just talk about systems and architecture. You also talk about data models, which you've described as one of the most important parts of developing software. Run us through the importance of data models and your thought process behind them.


Martin Kleppmann


Yeah, when people compare data systems, data models are often the first thing they focus on, because they're the most visible thing; they're just the thing up front, really. For example, there was a phase 10 years ago or so when MongoDB came out, along with a bunch of other document databases that presented themselves as alternatives to the relational model. They were saying: it's much nicer to group your data together into these JSON documents rather than having it spread out across a bunch of rows in a relational database. And this is a data model question, right? People looked at that and said, yeah, OK, they have some points there. But actually, over time, what we've seen is that these two different data models have converged somewhat. A lot of databases now have pretty good JSON support, Postgres and MySQL included. So the need for a dedicated type of database to handle document-model data is not as pressing anymore, because other databases can do that too. Conversely, in the other direction, some of the document databases have started adopting relational-style query languages, because they realized those are actually a really useful feature as well. So for a while, there was this phase where people said relational and document-oriented are like nemeses, total opposites of each other. And then it turned out that the two just merged, and more and more we don't even think of them as two different things, but as two different aspects of a data model that may well be implemented in the same system. We can apply similar arguments to other types of data models as well. A graph data model is another one that I quite like. I'm personally quite a fan of graphs, because I find them a very flexible way of describing data, like relationships between things. In particular, graphs tend to be very extensible: if you want to add a new property to something, or a new type of relationship between different entities, it's very easy to do that. But how do you represent a graph? Well, you can represent a graph on top of a relational database, for example; that's perfectly fine. You don't necessarily need a specialist graph database. A specialist graph database might be able to do some things faster than a relational database, like shortest-path queries, or other kinds of queries that depend on variable-length paths through a data set; those are things that SQL databases don't currently support very well, though they do kind of support them. So there again, I feel like we've got this graph data model, which is a useful, interesting contrast to the relational model. But at the same time, there's also a bit of convergence going on, where databases are essentially stealing the best ideas from other data models and incorporating them.
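
To make the "graph on top of a relational database" idea concrete, here is a minimal sketch. The field and relation names are hypothetical, but the shape mirrors the common two-table design, one table of vertices and one of edges; plain TypeScript arrays stand in for the SQL tables here:

```typescript
// A graph stored relational-style: one "table" of vertices,
// one "table" of edges.
interface Vertex {
  id: number
  label: string                       // e.g. 'person', 'update', 'photo'
  properties: Record<string, string>  // extensible without schema changes
}

interface Edge {
  fromId: number
  toId: number
  relation: string                    // e.g. 'liked', 'tagged_in', 'taken_at'
}

const vertices: Vertex[] = [
  { id: 1, label: 'person', properties: { name: 'Alice' } },
  { id: 2, label: 'update', properties: { text: 'Hello world' } },
]

const edges: Edge[] = [{ fromId: 1, toId: 2, relation: 'liked' }]

// Adding a new relationship type needs no schema migration:
// just insert another edge row with a new relation string.
function neighbors(id: number, relation: string): Vertex[] {
  return edges
    .filter(e => e.fromId === id && e.relation === relation)
    .map(e => vertices.find(v => v.id === e.toId)!)
}

console.log(neighbors(1, 'liked')) // the updates Alice liked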


David Brown


Graph data models have been around for a while, but they don't seem to have gone mainstream; they still seem to occupy a certain segment of the market, as opposed to SQL data models, which obviously dominate, and the NoSQL-type JSON data models you mentioned, which have obviously become big in the last 10 to 15 years. What is it about the graph data model that hasn't seen the same type of adoption?


Martin Kleppmann


I'm not sure, really, because my feeling is that it's actually a really good fit for a large class of applications. And certainly a lot of the big companies that publish about the way they structure their data have adopted graph data structures. Facebook, for example, is quite vocal about the fact that everything they store is essentially a graph. When you write an update, or if you like an update written by somebody else, that like is an edge in a graph between yourself and the update that you liked. The update that you liked has an edge in the graph to the person who wrote it, and also to the three other people who are tagged in that update. From there, you have an edge to the picture that's included in the update, which in turn has a link to the vertex representing the location where the picture was taken, and so on. This stuff fits beautifully into a graph. And because it's a graph, Facebook can add new types of entity into the system quite easily, and maintain all of this rich interaction information. My sense is that a lot of enterprise apps could really take a similar approach.


David Brown


In the final chapter of your book, you dedicated a chapter to the future of data systems. Are we executing on that future, do you think? Or has your vision for the future changed?


Martin Kleppmann


Yeah, so I explore a whole bunch of more speculative ideas in that chapter, and some aspects are definitely happening. What I was trying to think through there is: what does a world look like in which streaming data flows become more the center of how we design systems? The reason I was thinking about that is, if you consider a typical database query: I want to know how many socks of a particular color are in stock right now. So I make a query to the database, and I get back: OK, there are currently five pairs of socks in stock. And then what happens if that changes? Well, the database doesn't tell me if that number changes. If somebody buys two pairs and now there are only three pairs left in stock, the only way I can find that out is by repeating my query; then I'll find out the new result. There's no way for the system to notify me: hey, you earlier queried about the socks, the stock level for socks has now changed, you might want to know about that. So database queries are still stuck in this very request-response type of model. And likewise, most of the APIs that we use now, say REST APIs for microservices, have that exact same request-response model, where you make a request to a service and you get a response back. If the response subsequently becomes outdated, there's no way of finding out other than polling: keep polling again and again to see if something has changed. Polling is super inefficient. So really, we would like some way of getting notified when stuff changes. And that notification really needs to go through all of the layers of the stack, all the way up to the mobile app or the web browser the user is using. Because why would you want stale data being displayed on somebody's screen if you have the ability to update it in real time? Some information came from a database; it went through various levels of being rendered and going through business processes and so on; eventually it ended up as HTML on somebody's screen. And really, if that information goes out of date, it would be nice to push an update all the way up to the user's screen to reflect the change that has happened. Very few systems are currently set up in a way that really allows those changes in data to be propagated through all of the layers of the stack. You get streaming systems now built in a few narrow niches. One thing that is becoming quite popular is something called change data capture (CDC), where you don't just write your data to the database and read from it as usual; you also capture a stream of all of the changes, all of the updates written to the database. That stream can then be put in something like Kafka, where you can subscribe to it, and you can have a bunch of consumers that decide what to do with that information: maybe they will update a cache, or update a search index, or do some analytics, or notify something else that some data has changed. Whatever it is, at least there's now the ability to respond to changes written to a database. But this is still quite a long way from the bigger idea, where we don't just capture the changes from the database.

The next step is to push those changes through all of the layers of the stack, which are currently mostly REST APIs or other kinds of RPC that don't really support a streaming type of data flow. Can you take something out of the book of these real-time collaboration apps we were talking about earlier, such as Google Docs? Google Docs has the ability to update in real time on somebody else's screen when something changes in the underlying document. Why don't we have that sort of capability for absolutely all software, so that all software can update immediately, live on the screen, when something changes in the underlying data? That's going to be hard to get to, because so much of our software stack is currently based on this request-response paradigm, and changing that is going to be a very big job. So I don't expect this vision to be fully realized even in the next 10 years; I think it's just a bit too much of a jump for people. But I do think it's a very interesting idea to pursue, and maybe bits of it will be put into practice. At least, if it inspires people to think a little bit differently about their systems, then it will still have had some effect.
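
As a rough sketch of the change data capture pattern Kleppmann describes, here is what a downstream consumer might look like using the kafkajs client. The broker address, topic name, and message format are hypothetical; real CDC tools such as Debezium emit change events in their own schemas:

```typescript
import { Kafka } from 'kafkajs'

// Connect to a Kafka cluster carrying a CDC stream of database changes.
const kafka = new Kafka({ clientId: 'stock-cache', brokers: ['localhost:9092'] })
const consumer = kafka.consumer({ groupId: 'stock-cache-updaters' })

async function run() {
  await consumer.connect()
  await consumer.subscribe({ topic: 'inventory.changes', fromBeginning: false })

  await consumer.run({
    eachMessage: async ({ message }) => {
      // A hypothetical change event: { table, key, before, after }
      const change = JSON.parse(message.value!.toString())
      if (change.table === 'stock_levels') {
        // React to the change instead of polling the database:
        // e.g. refresh a cache, update a search index, or push
        // a live update to connected browsers over WebSockets.
        console.log(`stock for ${change.key} is now ${change.after.quantity}`)
      }
    },
  })
}

run().catch(console.error)
```

The point of the pattern is that the database's change stream, not the request-response query, becomes the integration point: every downstream system subscribes and stays up to date without polling.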


David Brown


Martin Kleppmann, super, super interesting stuff. You're working on some very interesting things and have very interesting ideas. How can our listeners follow you and hear what you're writing about and talking about?


Martin Kleppmann


Well, I have a Twitter account, @martinkl, which you're welcome to follow if you like. I occasionally write blog posts, only a few a year, but I try to go into some detail when I do write something. On my blog, martin.kleppmann.com, you can also find an email sign-up form, so that you get a little email when I write a new post. And finally, if you're interested in supporting this kind of thing financially, I did set up a Patreon account with the goal of trying to turn this into a potential career as an independent researcher, not tied to any institution necessarily, but just being able to continue doing the research and the teaching work that I do, and perhaps writing new books. A second edition of my current book is potentially in the works. So if you're interested in that and have a bit of money to spare, you're very welcome to chip in, and I send detailed updates to my supporters every month on the latest work that has been happening. It's also a way for you to get a front-row seat in the research process and see how these kinds of things happen internally, how the sausage gets made. So if you find that sort of thing interesting, then you might find the Patreon interesting.


David Brown


Good stuff. And of course, Designing Data-Intensive Applications is available on amazon.com as well. Martin Kleppmann, thank you very much for joining us today, and we wish you well on those future projects.


Martin Kleppmann


Great. Thank you for having me. Thank you.


Kevin Montalbo


All right. That's a wrap for this episode of Coding Over Cocktails. To our listeners: what did you think of this episode? Let us know in the comments section of the podcast platform you're listening on. Also, please visit our website at www.torocloud.com for a transcript of this episode, as well as our blogs and our products. We're also on social media: Facebook, LinkedIn, YouTube, Twitter, and Instagram. Talk to us there, because we listen; just look for Toro Cloud. On behalf of the team here at Toro Cloud, thank you very much for listening to us today. This has been Kevin Montalbo for Coding Over Cocktails. Cheers!

