Ant Wilson of Supabase discusses building an open source alternative to Firebase with PostgreSQL. SE Radio host Jeremy Jung spoke with Wilson about how Supabase compares to Firebase, building an API layer with PostgREST, authentication using GoTrue, row-level security, forking open source projects, using the write-ahead log to implement real-time updates, provisioning and monitoring databases, user support, incidents, and open source licenses.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Jeremy Jung 00:00:22 This is Jeremy Jung for Software Engineering Radio. Today I’m talking to Ant Wilson, the cofounder and CTO of Supabase. Ant, welcome to Software Engineering Radio.
Ant Wilson 00:00:32 Thanks so much. Great to be here.
Jeremy Jung 00:00:35 When I hear about Supabase, I always hear about it in relation to two other products. The first is Postgres, which is an Open Source relational database. We’ve got four shows on it that our audience can check out. And second is Firebase, which is a back-end as a service product from Google Cloud that provides a NoSQL data store. It provides authentication and authorization. It has a function as a service component. So, it’s really meant to be a replacement for you needing to have your own server, create your own back end. You can have that all be done from Firebase. I think a good place for us to start would be walking us through what Supabase is and how it relates to those two products.
Ant Wilson 00:01:25 Yeah, so we brand ourselves as the Open Source Firebase alternative. That came primarily from the fact that we ourselves used it as the alternative to Firebase. So my co-founder Paul, in his previous startup, was using Firestore, and as they started to scale, they hit certain limitations — technical scaling limitations — and he’d always been a huge Postgres fan. So he swapped it out for Postgres and then just started plugging in the bits that were missing, like the real-time streams, and he used a tool called PostgREST with a T for the CRUD APIs. And so he just built the Open Source Firebase alternative on PostgREST, and that’s kind of where the tagline came from. But the main difference obviously is that it’s a relational database and not a NoSQL database, which means that it’s not actually a drop-in replacement, but it does mean that it kind of opens the door to a lot more functionality, which is hopefully an advantage for us.
Jeremy Jung 00:02:27 And it’s a hosted form of Postgres. So, you mentioned that Firebase is different. It’s NoSQL; people are putting in their JSON objects and things like that. So when people are working with Supabase, is the experience just: I’m connecting to a Postgres database, I’m writing SQL? And in that regard, it’s kind of not really similar to Firebase at all. Is that kind of right?
Ant Wilson 00:02:53 Yeah. I mean, the other important thing to note is that you can communicate with Supabase directly from the client, which is what people love about Firebase: you just put the credentials in the client, you write some security rules, and then you just start sending your data. Obviously, with Supabase, you do need to create your schema because it’s relational. But apart from that, the experience of client-side development is very much the same or very similar. The interface, obviously the API, is a little bit different, but it’s similar in that regard. But I think, like I said, we are just a database company actually. And the tagline just explains really well the concept of what it is: a back end as a service. It has the real-time streams. It has the auth layer. It has the auto-generated APIs. So, I don’t know how long we’ll stick with the tagline. I think we’ll probably outgrow it at some point, but it does do a good job of communicating roughly what the service is.
Jeremy Jung 00:03:53 So when we talk about it being similar to Firebase, the part that’s similar to Firebase is that you could be a person building the front end part of the website, and you don’t need to necessarily have a back-end application because all of that could talk to Supabase, and Supabase can handle the authentication, the real-time notifications, all those sorts of things, similar to Firebase, where basically you only need to write the front-end part and then you have to know how to set up Supabase in this case.
Ant Wilson 00:04:27 Yeah, exactly. And some of the other — we love Firebase by the way — we’re not building an alternative to try and destroy it. It’s kind of like, we’re just building the SQL alternative and we take a lot of inspiration from it. And the other thing we love is that you can administer your database from the browser. So you go into Firebase and you can see the object tree, and when you’re in development, you can edit some of the documents in real time. And so we took that experience and effectively built like a spreadsheet view inside of our dashboard. And also obviously have a SQL editor in there as well, and trying to create a similar developer experience because that’s where Firebase just excels, is the DX is incredible. And so we take a lot of inspiration from it in those respects as well.
Jeremy Jung 00:05:15 And to make it clear to our listeners, as well, when you talk about this interface that’s kind of like a spreadsheet and things like that, I suppose it’s similar to somebody opening up PgAdmin, I suppose, and going in and editing the rows, but maybe you’ve got like another layer on top that just makes it a little more user friendly, a little bit more like something you would get from Firebase, I guess.
Ant Wilson 00:05:39 Yeah. And we take a lot of inspiration from PgAdmin. PgAdmin is also Open Source, so I think we’ve contributed, or are trying to upstream, a few things into PgAdmin. The other thing that we took a lot of inspiration from for the table editor, as we call it, is Airtable. Because Airtable is effectively a relational database that you can just come in and, you know, click to add your columns, click to add a new table. And so we just want to reproduce that experience, but again, backed up by a full dedicated Postgres database.
Jeremy Jung 00:06:14 So when you’re working with a Postgres database, normally you need some kind of layer in front of it, right? That the person can’t open up their website and connect directly to Postgres from their browser. And you mentioned PostgREST before. I wonder if you could explain a little bit about what that is and how it works.
Ant Wilson 00:06:34 Yeah, definitely. So yeah, PostgREST has been around for a while. It’s basically a server that you connect to your Postgres database, and it introspects your schemas and generates an API for you based on, you know, the table names, the column names. And then you can basically communicate with your Postgres database via this RESTful API. So you can do pretty much most of the filtering operations that you can do in SQL, equality filters; you can even do full text search over the API. So it just means that whenever you add a new table or a new schema or a new column, the API just updates instantly. So you don’t have to worry about writing that middle layer, which was always the drag, right? Whenever you start a new project, it’s like, okay, I’ve got my schema, I’ve got my clients. Now I have to do all the connecting code in the middle, and really no developer should need to write that layer in 2022.
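For a concrete picture of that generated API, here is a minimal sketch of querying it through the supabase-js client, assuming a hypothetical articles table with a status column and a body_fts tsvector column; the project URL and key are placeholders. The same filters map onto PostgREST query parameters if you call the REST endpoint directly.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholders: use your project URL and anon key from the dashboard.
const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

async function searchArticles() {
  // An equality filter plus full-text search, both served by the
  // auto-generated PostgREST API with no hand-written controller in between.
  const { data, error } = await supabase
    .from("articles")                   // hypothetical table
    .select("id, title")
    .eq("status", "published")          // becomes ?status=eq.published
    .textSearch("body_fts", "'postgres' & 'firebase'"); // assumed tsvector column

  if (error) throw error;
  return data;
}
```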
Jeremy Jung 00:07:36 So this layer you’re referring to: when I think of a traditional web application, I think of having to write routes, controllers, and create this sort of structure where I know all the tables in my database, but the controllers I create may not map one to one with those tables. And so you mentioned a little bit about how PostgREST looks at the schema and starts to build an API automatically. And I wonder if we could explain a little bit about how it does those mappings or if you’re writing those yourself.
Ant Wilson 00:08:10 Yeah. It basically does them automatically. By default, it will, you know, map every table, every column. When you want to start restricting things, well, there’s two parts to this. There’s one thing which I’m sure we’ll get into, which is how is this secure, since you are communicating directly from the client. But the other part is what you mentioned: giving a reduced view of a particular bit of data. And for that, we just use Postgres views. So you define a view which might, you know, have joins across a couple of different tables, or it might just be a limited set of columns on one of your tables. And then you can choose to just expose that view.
Jeremy Jung 00:08:51 So it sounds like when you would typically create a controller and create a route, instead you create a view within your Postgres database and then PostgREST can take that view and create an endpoint for it, map it to that.
Ant Wilson 00:09:06 Yeah, exactly.
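A rough sketch of that pattern, with hypothetical names: the view is created once in SQL, PostgREST exposes it automatically, and the client queries it like any table.

```typescript
// SQL run once against the database (e.g. in the SQL editor or a migration):
//
//   create view public_profiles as
//     select id, username, avatar_url
//     from profiles;   -- expose only a limited set of columns
//
// The generated API then serves the view with no extra routing code:
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

async function listPublicProfiles() {
  // Query the view rather than the underlying profiles table.
  const { data, error } = await supabase.from("public_profiles").select("*");
  if (error) throw error;
  return data;
}
```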
Jeremy Jung 00:09:08 And PostgREST is an Open Source project. Right. I wonder if you could talk a little bit about sort of what its history was, how did you come to choose it?
Ant Wilson 00:09:18 Yeah, I think Paul probably read about it on Hacker News at some point. Anytime it appears on Hacker News, it just gets voted to the front page because it’s so awesome. And we got connected to the maintainer, Steve Chavez, at some point. I think he just took an interest in, or we took an interest in, Postgres, and we kind of got acquainted. And then we found out that, you know, Steve was open to work, and this probably shaped a lot of the way we think about building out Supabase as a project and as a company, in that we then decided to employ Steve full time, but just to work on PostgREST, because it’s obviously a huge benefit for us. We are very reliant on it. We want it to succeed because it helps our business. And then as we started to add the other components, we decided that we would always look for existing tools, existing Open Source projects, before we decided to build something from scratch. So as we started to try and replicate the features of Firebase, there’s a great example: we did a full audit of all the authorization and authentication Open Source tools that are out there and which one, if any, would fit best. And we found that Netlify had built a library called GoTrue, written in Go, which did pretty much exactly what we needed. So we just adopted that. And now obviously we just have a lot of people on the team contributing to GoTrue as well.
Jeremy Jung 00:10:47 You touched on this a little bit earlier. Normally when you connect to a Postgres database, your user has permission to basically everything, I guess, by default anyways. And so how does that work when you want to restrict people’s permissions, make sure they only get to see records they’re allowed to see? How is that all configured in PostgREST, and what’s happening, you know, behind the scenes?
Ant Wilson 00:11:11 Yeah. The great thing about Postgres is it’s got this concept of row-level security, which actually I don’t think I had even really looked at until we were building out this auth feature, where the security rules live in your database as SQL. So you do like a create policy query, and you say, anytime someone tries to select or insert or update, apply this policy. And then how it all fits together is our auth server, GoTrue. Someone will basically make a request to sign in or sign up with email and password, and we create that user inside the database. They get issued a UUID, and they get issued a JSON Web Token, a JWT, which when they have it on the client side, proves that they are this UUID that has access to this data. Then when they make a request via PostgREST, they send the JWT in the authorization header.
Ant Wilson 00:12:10 Then PostgREST will pull out that JWT, check the sub claim, which is the UUID, and compare it to any rows in the database according to the policy that you wrote. So, the most basic one is you say: in order to access this row, it must have a UUID column and it must match whatever is in the JWT. So, we basically push the authorization down into the database, which actually has a lot of other benefits, in that as you write new clients, you don’t need to have it live on an API layer or on the client. Everything is just managed from the database.
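A minimal sketch of such a policy, using a hypothetical todos table with a user_id column and applied here as a one-off script with the node-postgres client; auth.uid() is the helper Supabase provides that reads the UUID out of the JWT’s sub claim.

```typescript
import { Client } from "pg";

// Migration-style script; DATABASE_URL is assumed to point at the Postgres instance.
async function applyPolicy() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Turn row-level security on, then only allow rows whose user_id
  // matches the UUID carried in the caller's JWT.
  await client.query(`alter table todos enable row level security;`);
  await client.query(`
    create policy "todos are private to their owner"
      on todos
      for all
      using (auth.uid() = user_id)
      with check (auth.uid() = user_id);
  `);

  await client.end();
}

applyPolicy().catch(console.error);
```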
Jeremy Jung 00:12:49 So the UUID, you mentioned that represents the user, correct?
Ant Wilson 00:12:54 Yeah.
Jeremy Jung 00:12:55 And then does that map to a user in Postgres, or is there some other way that you’re mapping its permissions?
Ant Wilson 00:13:03 Yeah. So when you connect GoTrue, which is the auth server, to your Postgres database for the first time, it installs its own schema. So you’ll have an auth schema, and inside will be an auth.users table with a list of the users; it’ll have auth.tokens, which will store all the access tokens that it’s issued. And one of the columns on the auth.users table will be the UUID. Then whenever you write application-specific schemas, you can just join and do a foreign key relation to the auth.users table. So it all gets into schema design, and hopefully we do a good job of having some good education content in the docs as well. Because one of the things we struggled with from the start was how much do we abstract away from SQL, away from Postgres, and how much do we educate? And we actually landed on the educate side, because once you start learning about Postgres, it becomes kind of a superpower for you as a developer. And so we’d much rather have people discover us because we’re a Firebase alternative, front-end devs, and then we help them with things like schema design and learning about row-level security. Because ultimately, if you try and abstract that stuff, it gets kind of crappy and maybe not such a great experience.
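To make the schema relationship concrete, a sketch of an application table that hangs off GoTrue’s users table might look like this; the profiles table and its columns are hypothetical, and auth.users(id) is the UUID column being referenced.

```typescript
import { Client } from "pg";

// Illustrative migration: an application-specific table that foreign-keys
// into the auth schema installed by GoTrue.
async function createProfilesTable() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  await client.query(`
    create table profiles (
      id uuid primary key references auth.users (id), -- one profile per auth user
      username text unique,
      avatar_url text
    );
  `);

  await client.end();
}

createProfilesTable().catch(console.error);
```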
Jeremy Jung 00:14:26 To make sure I understand correctly: so you have GoTrue, which is a Netlify Open Source project. That GoTrue project creates some tables in your database that have, like you mentioned, the tokens and the different users. Somebody makes a request to GoTrue, like here’s my username, my password, and GoTrue gives them back a JWT. And then from your front end, you send that JWT to the PostgREST endpoint. And from that JWT, it’s able to know which user you are, and then uses Postgres’s built-in row-level security to figure out which rows you’re allowed to bring back. Did I get that right?
Ant Wilson 00:15:10 That is pretty much exactly how it works. And it’s impressive that you got that without looking at a single diagram. Yeah, and obviously we provide a client library, supabase-js, which actually does a lot of this work for you. So you don’t need to manually attach the JWT in a header. If you’ve authenticated with supabase-js, then for every request sent to Postgres after that point, the header will just be attached automatically, and you’ll be in a session as that user.
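Roughly, that client-side flow looks like the following; the auth method name differs between supabase-js versions (signIn in v1, signInWithPassword in v2), and the todos table is hypothetical, so treat this as a sketch rather than the definitive API.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

async function loadMyTodos(email: string, password: string) {
  // 1. GoTrue verifies the credentials and hands back a JWT for this user.
  //    (supabase-js v1 shown; v2 renames this to signInWithPassword.)
  const { error: authError } = await supabase.auth.signIn({ email, password });
  if (authError) throw authError;

  // 2. Later requests carry the JWT automatically, so row-level security
  //    sees auth.uid() on the server and filters rows to this user.
  const { data, error } = await supabase.from("todos").select("*");
  if (error) throw error;
  return data;
}
```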
Jeremy Jung 00:15:42 And the users that we’re talking about, when we talk about Postgres’s row-level security, are those actual users in Postgres? Like, if I was to log in with psql, I could actually log in with those users?
Ant Wilson 00:15:58 They’re not. You could potentially structure it that way, but it would be more advanced. It’s basically just users in the auth.users table, the way it’s currently done.
Jeremy Jung 00:16:08 I see. And Postgres’s row-level security is able to work with that table. You don’t need to have actual Postgres users?
Ant Wilson 00:16:18 Exactly. And it’s basically Turing complete. I mean, you can write extremely complex policies. You can say, you know, only give access to this particular Admin group on a Thursday afternoon between 6 and 8 PM. You can get really as fancy as you want.
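That Thursday-evening example can actually be written down. A hedged sketch, with hypothetical reports and admins tables and the server’s time zone assumed:

```typescript
import { Client } from "pg";

// A deliberately fancy policy: members of an admins table may read reports,
// but only on Thursdays between 18:00 and 20:00 server time.
async function applyFancyPolicy() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  await client.query(`
    create policy "admins, thursday evenings only"
      on reports
      for select
      using (
        auth.uid() in (select user_id from admins)
        and extract(dow  from now()) = 4                 -- 4 = Thursday
        and extract(hour from now()) between 18 and 19   -- 18:00 to 19:59
      );
  `);

  await client.end();
}

applyFancyPolicy().catch(console.error);
```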
Jeremy Jung 00:16:36 Is that all written in SQL or are there other languages they allow you to use?
Ant Wilson 00:16:41 Yeah. The default is plain SQL. Within Postgres itself, you can use, I think, a Python extension; there’s a JavaScript extension, which I think is a subset of JavaScript. I mean, this is the thing with Postgres: it’s super extensible, and people have probably got all kinds of interpreters. So you can use whatever you want, but the typical user will just use SQL.
Jeremy Jung 00:17:06 Interesting. And that applies to logic in general, I suppose, where if you were writing a Rails application, you might write Ruby. If you’re writing a Node application, you write JavaScript. But you’re saying in a lot of cases with Postgres, you’re actually able to do what you want to do, whether that’s serialization or mapping objects, all through SQL?
Ant Wilson 00:17:30 Yeah, exactly. And then obviously there’s a lot of other awesome stuff that Postgres has, like PostGIS: if you’ve got a GEO application, it’ll load it up with GEO types for you, which you can just use. If you’re doing encryption and decryption, we just added pgsodium, which is a new and awesome cryptography extension. And these all add functions, SQL functions, which you can use in any part of the logic or in the row-level policies. Yeah.
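As one small example of an extension adding SQL functions you can then call from anywhere, including inside policies, a PostGIS proximity query might look roughly like this; the places table and its location geography column are hypothetical, and the extension is assumed to be enabled already.

```typescript
import { Client } from "pg";

// Assumes `create extension postgis;` has already been run on the database.
async function placesNearby(lon: number, lat: number) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // ST_DWithin and ST_MakePoint are SQL functions added by PostGIS;
  // this finds places within 5 km of the given point.
  const { rows } = await client.query(
    `select name
       from places
      where ST_DWithin(location, ST_MakePoint($1, $2)::geography, 5000);`,
    [lon, lat]
  );

  await client.end();
  return rows;
}
```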
Jeremy Jung 00:18:04 And something I thought was a little unique about PostgREST is that I believe it’s written in Haskell, is that right?
Ant Wilson 00:18:11 Yeah, exactly. And it makes it fairly inaccessible to me as a result. But the good thing is it’s got a thriving community of its own, and there’s people who contribute probably because it’s written in Haskell, and it’s just a really awesome project and it’s an excuse to contribute to it. But yeah, I think I did the intro course, like many people, and beyond that it’s just, yeah, kind of inaccessible to me.
Jeremy Jung 00:18:37 Yeah. I suppose that’s the trade off, right? You have a really passionate community about like people who really want to use Haskell and then you’ve got the, I guess the group like yourselves that looks at it and goes, oh, I don’t know about this.
Ant Wilson 00:18:51 I would love to have the time to invest in it. Not practical right now.
Jeremy Jung 00:18:55 You talked a little bit about the GoTrue project from Netlify. I think I saw on one of your blog posts that you actually forked it. Can you sort of explain the reasoning behind doing that?
Ant Wilson 00:19:06 Yeah, initially it was because we were trying to move extremely fast. So we did Y Combinator in 2020. And when you do Y Combinator, you get a group partner, as they call it, one of the partners from YC, and they add a huge amount of external pressure to move very quickly. And our biggest feature that we were working on in that period was auth. And we just kept getting the question of, when are you going to ship auth? You know, and every single week we’d be like, we’re working on it, we’re working on it. And one of the ways we could do that was to just iterate extremely quickly, and we didn’t really have the time to upstream things correctly. And actually, the way we use it in our stack is slightly different: they connected to MySQL, we connected to Postgres. So we had to make some structural changes to do that. And the dream would be now that we spend some time upstreaming a lot of the changes, and hopefully we do get around to that. But the pace at which we’ve had to move over the last year and a half has been kind of scary. And that’s the main reason. But, you know, hopefully now we’re a little bit more established, we can hire some more people to just focus on GoTrue and bring the two forks back together.
Jeremy Jung 00:20:22 Yeah. It’s just a matter of, like you said, the speed, I suppose. Because with PostgREST, you chose to continue working off of the existing Open Source project, right?
Ant Wilson 00:20:35 Yeah, exactly. And I think the other thing is, it’s not a major part of Netlify’s business, as I understand it. I think if it was, and if both companies had more resource behind it, it would make sense to obviously focus on the single code base. But I think both companies don’t contribute as much resource as we would like to. But for me, it’s one of my favorite parts of the stack to work on, because it’s written in Go and I kind of enjoy how it all fits together. So yeah, I like to dive in there.
Jeremy Jung 00:21:07 What do you particularly enjoy about Go, or about how that part of the project is structured?
Ant Wilson 00:21:13 So I actually learned Go through GoTrue, and I have a Python and C++ background. And I hate the fact that I don’t really get to use Python and C++ in my day-to-day job; it’s obviously a lot of TypeScript. And when we inherited this code base, as I was picking it up, it just reminded me a lot of the things I loved about Python and C++, and the tooling around it as well I just found to be exceptional. So, you know, you just do a small amount of config and it makes it very difficult to write bad code, if that makes sense. So the compiler will just boot you back when you try and do something silly, which isn’t necessarily the case with JavaScript. I think TypeScript is a little bit better now, but it just reminded me a lot of my Python and C days.
Jeremy Jung 00:22:01 Yeah. I’m not too familiar with Go, but my understanding is that there’s a formatter that’s a part of the language, so there’s kind of consistency there. And then the language itself tries to get people to build things in the same way, or maybe have simpler ways of building things. I don’t know. Maybe that’s part of the appeal.
Ant Wilson 00:22:21 Yeah, exactly. And the package manager as well is great. It just does a lot of the importing automatically and makes sure all the declarations at the top are formatted correctly and are definitely there. So yeah, just all of that tool chain is really easy to pick up.
Jeremy Jung 00:22:40 Yeah. And I think compiled languages as well, when you have the static type checking by the compiler, you know, not having things blow up at run time. It’s just such a big relief, at least for me, in a lot of cases.
Ant Wilson 00:22:52 I just love the dopamine hit of when you compile something and it actually compiles. Yeah, I lose that working with JavaScript.
Jeremy Jung 00:23:01 For sure. One of the topics you mentioned earlier was how Supabase provides real time database updates, which is something that as far as I know, is not natively a part of Postgres. So I wonder if you could explain a little bit about how that works and how that came about.
Ant Wilson 00:23:19 Yeah. So Postgres, when you add replica databases, the way it does it is it writes everything to this thing called the write-ahead log, which is basically all the changes that are going to be applied to the database. And when you connect a replica database, it basically streams that log across, and that’s how the replica knows what changes to apply. So we wrote a server which basically pretends to be a Postgres replica, receives the write-ahead log, encodes it into JSON, and then you can subscribe to that server over websockets. And so you can choose whether to subscribe to changes on a particular schema or a particular table or particular columns, and even do equality matches on rows and things like this. And then we recently added the row-level security policies to the real-time stream as well. That was something that took us a while, because it was probably one of the largest technical challenges we’d faced. But now that it’s in, the real-time stream is fully secure, and you can apply the same policies that you apply over the CRUD API as well.
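On the client, subscribing to that stream looks roughly like this. The syntax shown is the v1 supabase-js style (table:column=eq.value); newer client versions use a channel-based API instead, and the messages table and room_id filter are made up, so treat the exact calls as assumptions.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

// Listen for new rows in a hypothetical `messages` table, but only for room 42.
// Under the hood this is the decoded write-ahead log pushed over a websocket.
const subscription = supabase
  .from("messages:room_id=eq.42")
  .on("INSERT", (payload) => {
    console.log("new message:", payload.new);
  })
  .subscribe();

// Later, e.g. when the page closes:
// supabase.removeSubscription(subscription);
```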
Jeremy Jung 00:24:28 So for that part, did you have to look into the internals of Postgres and how it did its row level security and try to duplicate that in your own code?
Ant Wilson 00:24:37 Yeah, pretty much. I mean, it’s fairly complex, and there’s a guy on our team who, well, for him it didn’t seem as complex, let’s say. But yeah, that’s pretty much it. It’s effectively a Postgres extension itself, which interprets those policies and applies them to the write-ahead log.
Jeremy Jung 00:24:57 And this piece that you wrote that’s listening to the Write-Ahead Log, what was it written in and how did you choose that language or that stack?
Ant Wilson 00:25:05 Yeah, that’s written in Elixir, which is based on Erlang, very horizontally scalable. So, any applications that you write in Elixir can kind of just scale horizontally; the message passing can, you know, go into the billions and it’s no problem. So, it just seemed like a sensible choice for this type of application, where you don’t know how large the WAL is going to be. It could just be a few changes per second; it could be a million changes per second, and then you need to be able to scale out. And I think Paul, who’s my co-founder, originally wrote the first version of it, and I think he wrote it as an excuse to learn Elixir, which is probably how PostgREST ended up being Haskell, I imagine. And the Elixir community is still relatively small, but it’s a group of very passionate and very highly skilled developers. So, when we hire from that pool, everyone who comes onboard is just really good and really enjoys working with Elixir. So, it’s been a good source for hires as well, just using those tools.
Jeremy Jung 00:26:48 With a feature like this, I’m assuming it’s where somebody goes to their website. They make a web socket connection to your application and they receive the updates that way. Have you seen how far you’re able to push that in terms of connections, in terms of throughput, things like that?
Ant Wilson 00:27:06 Yeah. I don’t actually have the numbers at hand, but we have a team focused on obviously maximizing that, but yeah, don’t have those numbers right now.
Jeremy Jung 00:27:16 One of the last things you’ve got on your website is a storage product, and I believe it’s written in TypeScript. So I was curious: we’ve got PostgREST, which is in Haskell; we’ve got GoTrue in Go; we’ve got the real-time database part in Elixir. And so with storage, how did we finally get to TypeScript?
Ant Wilson 00:27:36 Well, the policy we kind of landed on was best tool for the job. Again, the good thing about being Open Source is we’re not resource-constrained by the number of people who are on our team; it’s by the number of people who are in the community and willing to contribute. And so for that, I think one of the guys just went through a few different options. We could have gone with Go just to keep it in line with a couple of the other APIs, but we just decided, you know, everyone in the team likes TypeScript; it’s kind of just a given. And again, it was kind of down to speed: what’s the fastest we can get this up and running? And I think if we used TypeScript, it was the best solution there. But we just always go with whatever is best. We don’t worry too much about the resources we have, because the Open Source community has just been so great in helping us build Supabase. And building Supabase is like building five companies at the same time actually, because each of these vertical stacks could be its own startup, like the auth stack and the storage layer and all of this stuff, and each of those has its own dedicated team. So yeah, we’re not too worried about the variation in languages.
Jeremy Jung 00:28:51 And the storage layer, is this basically a wrapper around S3 or like, what is that product doing?
Ant Wilson 00:28:59 Yeah, exactly. It’s a wrapper around S3. It would also work with all of the S3-compatible storage systems; there’s a few, Backblaze and a few others. So if you wanted to self-host and use one of those alternatives, you could; we just have everything in our own S3 buckets inside of AWS. And then the other awesome thing about the storage system is that because we store the metadata inside of Postgres, basically the object tree of what buckets and folders and files are there, you can write your row-level policies against the object tree. So you can say this user should only access this folder and its children, which was kind of an accident; we just landed on that. But it’s one of my favorite things now about writing applications on Supabase: the row-level policies kind of apply everywhere.
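A rough sketch of that per-folder rule: the runnable part is the upload call from supabase-js, and the commented SQL shows the shape of a policy on the storage metadata table. The avatars bucket is hypothetical, and the storage.objects table and its columns should be checked against the docs for your version.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

// Upload into a folder named after the signed-in user's UUID, e.g. "<uuid>/avatar.png".
async function uploadAvatar(userId: string, file: File) {
  const { data, error } = await supabase.storage
    .from("avatars") // hypothetical bucket
    .upload(`${userId}/avatar.png`, file);
  if (error) throw error;
  return data;
}

// Sketch of a matching row-level policy on the metadata table, so users can
// only touch objects inside their own folder:
//
//   create policy "users manage their own folder"
//     on storage.objects
//     for all
//     using (bucket_id = 'avatars' and name like auth.uid()::text || '/%');
```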
Jeremy Jung 00:29:53 Yeah, it’s interesting. It sounds like everything, whether it’s the storage or the authentication, all comes back to Postgres, right? It’s using the row-level security, it’s using everything that you put into the tables there, and everything’s just kind of digging into that to get what it needs.
Ant Wilson 00:30:12 Yeah. And that’s why I say we are a database company. We are a Postgres company. We’re all in on Postgres. We got asked in the early days: oh, well, would you also make it MySQL compatible, or compatible with something else? But the amount of features Postgres has, if we just continue to leverage them, just makes the stack way more powerful than if we tried to go thin across multiple different databases.
Jeremy Jung 00:30:42 And so that kind of brings me to: you mentioned how you’re a Postgres company. So when somebody signs up for Supabase, they create their first instance. What’s happening behind the scenes? Are you creating a Postgres instance for them in a container, for example? How do you size it? That sort of thing.
Ant Wilson 00:31:01 Yeah. So it’s basically just EC2 under the hood. For us, we have plans eventually to be multi-cloud, but again, going down to speed of execution, the fastest way was to just spin up a dedicated instance, a dedicated Postgres instance per user, on EC2. We do also package all of the APIs together in a second EC2 instance, but we’re starting to break those out into clustered services. So for example, you know, not every user will use the storage API, so it doesn’t make sense to run it for every user regardless. So we’ve made that application code multi-tenant, and now we just run a huge global cluster which people connect through to access the S3 buckets, basically. And we have plans to do that for the other services as well. So right now you get two EC2 instances, but over time it’ll be just the Postgres instance. And we wanted to give everyone a dedicated instance, because there’s nothing worse than sharing database resource with other users, especially when you don’t know how heavily they’re going to use it, whether they’re going to be bursty. So I think one of the things we just said from the start is everyone gets a Postgres instance, and you get access to it as well. You can, you know, use your Postgres connection string to log in from the command line and do whatever you want; it’s yours.
Jeremy Jung 00:32:27 So did I get it right that, when I sign up I create a Supabase account? You’re actually creating an EC2 instance for me specifically. So it’s like every customer gets their own isolated, it’s their own CPU, their own RAM, that sort of thing?
Ant Wilson 00:32:43 Yeah, exactly. And the way we’ve set up the monitoring as well is that we can expose basically all of that to you in the dashboard. So you have some control over the resources you want to use. If you want a more powerful instance, we can do that. A lot of that stuff is automated, so if someone scales beyond the allocated disk size, the disk will automatically scale up by 50% each time. And we’re working on automating a bunch of these other things as well.
Jeremy Jung 00:33:12 So is it where, when you first create the account, you might create, for example, a micro instance, and then you have internal monitoring tools that see, oh, the CPU’s getting hit pretty hard. So we need to migrate this person to a bigger instance. That kind of thing?
Ant Wilson 00:33:29 Yeah, pretty much exactly.
Jeremy Jung 00:33:30 And is that something that the user would even see or is it the case of where you send them an email and go like, Hey, we notice you’re hitting the limits here. Here’s what’s going to happen.
Ant Wilson 00:33:41 Yeah. In most cases it’s handled automatically. There are people who come in and, from day one, they say: here’s my requirements, I’m going to have this much traffic, I’m going to have, you know, a hundred thousand users hitting this every hour. And in those cases we will over-provision from the start. But if it’s just the self-service case, then it will start on, you know, a smaller instance and upgrade over time. And this is one of our biggest challenges over the next five years: we want to move to a more scalable Postgres, so Cloud-native Postgres. But the cool thing about this is there’s a lot of different companies and individuals working on this and upstreaming it into Postgres itself. So for us, we don’t need to, and we would never want to, fork Postgres and try and separate the storage and the compute. Rather, we’re going to fund people who are already working on this so that it gets upstreamed into Postgres itself, and it’s more Cloud-native.
Jeremy Jung 00:34:44 Yeah. So, like we talked a little bit about, Firebase was the original inspiration, and when you work with Firebase, you don’t think about an instance at all, right? You just put data in, you get data out. And it sounds like in this case, you’re kind of working from the standpoint of: we’re going to give you this single Postgres instance; as you hit the limits, we’ll give you a bigger one. But at some point you will hit a limit where just that one instance is not enough, and I wonder if you have any plans for that or if you’re doing anything currently to handle that.
Ant Wilson 00:35:21 Yeah. So the medium-term goal is to do replication and horizontal scaling. We do that for some users already, but we manually set that up. We do want to bring that to the self-serve model as well, where you can just choose from the start: I want, you know, replicas in these zones and in these different data centers. But then, like I said, the long-term goal is that it’s not based on horizontally scaling a number of instances; it’s just that Postgres itself can scale out. And honestly, at the rate at which the Postgres community is working, I think we’ll be there in two years. And if we can contribute resource towards that goal, yeah, we’d love to do that. But for now we’re working on this intermediate solution of what people already do with Postgres, which is, you know, have your replicas to make it highly available.
Jeremy Jung 00:36:13 And with that, I suppose, at least in the short term, the goal is that your monitoring software and your team is handling the scaling up of the instance or creating the read replicas. So to the user, it for the most part feels like a managed service. And then, yeah, the next step would be to get something more similar to maybe Amazon’s Aurora, I suppose, where you just kind of pay per use.
Ant Wilson 00:36:42 Yeah, exactly. Aurora was kind of the goal from the start. It’s just a shame that it’s proprietary, obviously. I think the world would be a better place if Aurora was Open Source.
Jeremy Jung 00:36:52 Yeah, and it sounds like, as you said, there’s people in the Open Source community that are trying to get there; it’ll just take time. So, all this about making it feel seamless, making it feel like a serverless experience, even though internally it really isn’t: I’m guessing you must have a fair amount of monitoring or ways that you’re making these decisions. I wonder if you can talk a little bit about, you know, what are the metrics you’re looking at and what are the applications you have to help you make these decisions?
Ant Wilson 00:37:22 Yeah, definitely. So we started with Prometheus, which is a, you know, metrics-gathering tool. And then we moved to VictoriaMetrics, which was just easier for us to scale out. I think soon something like a hundred thousand Postgres databases will have been deployed on Supabase, so definitely some scale, and this kind of tooling needs to scale to that as well. And then we have agents kind of everywhere: on each application, on the database itself. And we listen for things like the CPU and the RAM and the network IO. We also poll Postgres itself; there’s an extension called pg_stat_statements, which will give us information about what are the intensive queries that are running on that box. So we just collect as much of this as possible, which we then obviously use internally. We set alerts to know when we need to upgrade in a certain direction, but we also have an endpoint where the dashboard subscribes to these metrics as well, so the user themselves can see a lot of this information. And I think at the moment we do a lot of the RAM, the CPU, that kind of stuff, but we’re working on adding more and more of these observability metrics so people can know, because it also helps with, let’s say, you might be lacking an index on a particular table and not know about it. And so if we can expose that to you and give you alerts about that kind of thing, then it obviously helps with the developer experience as well.
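For the curious, the kind of query an agent (or you, over psql) can run against pg_stat_statements looks roughly like this; the timing columns are total_exec_time and mean_exec_time on Postgres 13 and later, and named differently on older releases, so adjust for your version.

```typescript
import { Client } from "pg";

// Pull the ten most expensive statements by total execution time.
// Assumes the pg_stat_statements extension is enabled on the database.
async function slowestQueries() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  const { rows } = await client.query(`
    select query, calls, total_exec_time, mean_exec_time
      from pg_stat_statements
     order by total_exec_time desc
     limit 10;
  `);

  await client.end();
  return rows;
}
```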
Jeremy Jung 00:38:51 Yeah. And it brings me to something that I hear from platform-as-a-service companies, where if a user has a problem, whether that’s a crash or a performance problem, sometimes it can be difficult to distinguish between: is it a problem in their application or is this a problem in Supabase? And I wonder how your support team approaches that.
Ant Wilson 00:39:13 Yeah, no, it’s a great question. And it’s definitely something we deal with every day. I think because of where we’re at as a company, we’ve always seen that we actually have a huge advantage in that we can provide really good support. So anytime an engineer joins Supabase, we tell them your primary job is actually frontline support; everything you do afterwards is secondary. And so everyone does a four-hour shift per week of working directly with the customers to help determine this kind of thing. And where we are at the moment is we are happy to dive in and help people with their application code, because it helps our engineers learn about how it’s being used and where the pitfalls are, where we need better documentation, where we need education. So that is all part of the product at the moment, actually. And like I said, because we’re not a 10,000-person company, it’s an advantage that we have that we can deliver that level of support at the moment.
Jeremy Jung 00:40:14 What are some of the most common things you see happening? I would expect, you mentioned indexing problems, but I’m wondering if there’s any specific things that just come up again and again?
Ant Wilson 00:40:25 I think the most common is people not batching their requests. So they write an application which, you know, needs to pull 10,000 rows, and they send 10,000 requests. That’s a typical one for people just getting started, maybe. And then I think the other thing we faced in the early days was people storing blobs in the database, which we obviously solved by introducing file storage. But people would be trying to store 50-megabyte, 100-megabyte files in Postgres itself and then asking why the performance was so bad. So I think we’ve mitigated that one by introducing the blob storage.
Jeremy Jung 00:41:06 And you mentioned you have over a hundred thousand instances running. I imagine there have to be cases where an incident occurs, where something doesn’t go quite right. And I wonder if you could give an example of one and how it was resolved.
Ant Wilson 00:41:24 Yeah, it’s a good question. We’ve improved the systems since then, but there was a period where our real-time server wasn’t able to handle really large write-ahead logs. So there was a period where people were just making tons and tons of requests and updates to Postgres, and the real-time subscriptions were failing. But like I said, we have some really great Elixir devs on the team, so they were able to jump on that fairly quickly. And now, you know, the application is way more scalable as a result. And that’s just kind of how the support model works: you have a period where everything is breaking, and then you just, you know, tackle these things one by one.
Jeremy Jung 00:42:07 Yeah. I think anybody at an early startup is going to run into that, right? You put it out there and then you find out what’s broken, you fix it, and you just get better and better as it goes along.
Ant Wilson 00:42:18 Yeah. And the funny thing was, this model of deploying EC2 instances, we had that in like the first week of starting Supabase, just me and Paul, and it was never intended to be the final solution. We just kind of did it quickly to get something up and running for our first handful of users, but it scaled surprisingly well. And actually the things that broke as we started to get a lot of traffic and a lot of attention were just silly things. Like, we give everyone their own subdomain when they start a new project, so you’ll have projectref.supabase.co, and the things that were breaking were like, you know, we ran out of subdomains with our DNS provider. And those things always happen in periods of intense traffic: we were on the front page of Hacker News, or we had a TechCrunch article, and then you discover that you’ve run out of subdomains and the last thousand people couldn’t deploy their projects. So that’s always a fun challenge, because you are then dependent on the external provider as well and their support systems. So yeah, I think we did a surprisingly good job of putting in good infrastructure from the start, but all of these crazy things just break when you get a lot of traffic.
Jeremy Jung 00:43:38 Yeah. I find it interesting that you mentioned how you started with creating the EC2 instances. It turned out that just worked. I wonder if you could walk me through a little bit about how it worked in the beginning, like, was it the two of you going in and creating instances as people signed up and then how it went from there to where it is today?
Ant Wilson 00:43:58 Yeah. So there’s a good story about our first user, actually. So me and Paul used to contract for a company in Singapore, which was an NFT company, and so we knew the lead developer very well, and we also still had the Postgres credentials on our own machines. And, well, the other funny thing is, when we first started, we didn’t intend to host the database. We thought we were just going to host the applications that would connect to your existing Postgres instance. And so what we did was we hooked up the applications to the Postgres instance of this startup that we knew very well. And then we took the bus to their office and we sat with the lead developer and we said: look, we’ve already set this thing up for you, what do you think? And you know, you think you’ve got the best thing ever, but it’s not until you put it in front of someone and you see them contemplating it that you’re like: oh, maybe it’s not so good, maybe we don’t have anything. And we had that moment of panic of, oh, maybe this isn’t great. And then what happened was, he didn’t just become a Supabase user; he asked to join the team.
Jeremy Jung 00:45:12 Nice.
Ant Wilson 00:45:13 So that was a good moment where we thought, okay, maybe we have got something, maybe this isn’t terrible. So he became our first employee.
Jeremy Jung 00:45:20 And so that case was, you know, the very beginning; you set everything up from scratch. Now that you have people signing up, and you have, you know, I don’t know how many signups you get a day, did you write custom infrastructure or applications to do the provisioning, or is there an Open Source project that you’re using to handle that?
Ant Wilson 00:45:40 Yeah, it’s actually mostly custom. You know, AWS does a lot of the heavy lifting for you; they just provide you with a bunch of API endpoints. So a lot of that is just written in TypeScript, fairly straightforward. And like I said, it was never intended to be the thing that lasts two years into the business, but it’s just scaled surprisingly well. And I’m sure at some point we’ll swap it out for some, I don’t know, orchestration tooling like Pulumi or something like this. But actually what we’ve got just works really well, because we’re so into Postgres: our queuing system is a Postgres extension called pg-boss. And then we have a fleet of workers, which we manage on ECS. So it’s just a bunch of VMs, basically, which just subscribe to the queue, which lives inside the database, and just perform, whether it be project creation, deletion, or modification, the whole suite of these things. Yeah.
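A hedged sketch of that queue-in-Postgres pattern with pg-boss; the queue name and job payload are invented, and the method names follow recent pg-boss releases (send/work), with older releases using publish/subscribe and slightly different handler signatures.

```typescript
import PgBoss from "pg-boss";

// The queue itself lives inside Postgres; pg-boss creates its own schema there.
const boss = new PgBoss(process.env.DATABASE_URL!);

async function main() {
  await boss.start();

  // Producer side: enqueue a hypothetical project-creation job.
  await boss.send("create-project", { projectRef: "abcd1234", region: "us-east-1" });

  // Worker side: a fleet of these processes can work the same queue.
  // (Handler shape varies by pg-boss version; a single job is shown here.)
  await boss.work("create-project", async (job) => {
    console.log("provisioning project", job.data);
    // ...call the cloud provider APIs here...
  });
}

main().catch(console.error);
```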
Jeremy Jung 00:46:36 Very cool. So even your provisioning is based on Postgres.
Ant Wilson 00:46:40 Yeah, exactly.
Jeremy Jung 00:46:42 I guess in that case, I think, did you say you’re using the Write-Ahead Log there too in order to get notifications?
Ant Wilson 00:46:49 We do use real time. This is the fun thing about building Supabase: we use Supabase to build Supabase. A lot of the features start with things that we build for ourselves. So, the observability features: we have a huge logging division. We were very early users of a tool called Logflare, which is also written in Elixir. It’s basically a log sink backed by BigQuery, and we loved it so much, and became such Logflare power users, that we decided to eventually acquire the company. And now we can just offer Logflare to all of our customers as well as part of using Supabase. So you can query your logs and get really good business intelligence on what your users are consuming from your database.
Jeremy Jung 00:47:36 The Logflare you’re mentioning, though, you said that that’s a log sink, and that’s actually not going to Postgres, right? That’s going to a different type of store?
Ant Wilson 00:47:44 Yeah. That is going to BigQuery actually.
Jeremy Jung 00:47:46 Oh, BigQuery. Okay.
Ant Wilson 00:47:48 Yeah. And maybe eventually, and this is the cool thing about watching the Postgres progression is it’s bringing like transactional and analytical databases together. So it’s traditionally been a great transactional database, but if you look at a lot of the changes that have been made in recent versions, it’s becoming closer and closer to an analytical database. So maybe at some point we’ll use it, but yeah. But BigQuery works just great.
Jeremy Jung 00:48:14 Yeah. It’s interesting to see, like I know that we’ve had Episodes on different extensions to Postgres where I believe they change out how the storage works. So there’s, yeah, it’s really interesting how it’s this one database, but it seems like it can take so many different forms.
Ant Wilson 00:48:31 It’s just so extensible, and that’s why we’re so bullish on it. Because, okay, maybe it wasn’t always the best database, but now it seems like it is becoming the best database, and at the rate at which it’s moving, where is it going to be in five years? So yeah, we’re just very bullish on Postgres, as you can tell from the amount of mentions it’s had in this episode.
Jeremy Jung 00:48:53 Yeah. We’ll have to count how many times it’s been said. I’m sure it’s up there. Is there anything else we missed or think you should have mentioned?
Ant Wilson 00:49:02 No. Some of the things we are excited about are cloud functions. It’s the thing we just get asked for the most: anytime we post anything on Twitter, you’re guaranteed to get a reply which is like, “when functions?” And we’re very pleased to say that it’s almost there, so that will hopefully be a really good developer experience. We also launched a GraphQL Postgres extension, where the resolver lives inside of Postgres, and that’s still in early alpha, but I’m quite excited for when we can start offering that on the platform as well. People will have the option to use GraphQL instead of, or as well as, the RESTful API.
Jeremy Jung 00:49:45 The common thread here is that Postgres, you’re able to take it really, really far, right? In terms of scale up, eventually you’ll have the read replicas. Hopefully you’ll have some kind of, I don’t know what you would call Aurora, but it’s almost like self-provisioning, maybe; I’m not sure how you’d describe it. But I wonder, as a company, like we talked about BigQuery, right, I wonder if there’s any use cases that you’ve come across, either from customers or in your own work, where you’re like, ah, I just can’t get it to fit into Postgres.
Ant Wilson 00:50:19 I think, not very often, but sometimes we will respond to support requests and recommend that people use Firebase. If they really do have large amounts of unstructured data, which, you know, document storage is kind of perfect for, then we’ll just say, you know, maybe you should just use Firebase. So we definitely come across things like that. And like I said, we love Firebase, so we’re definitely not trying to destroy it as a tool. I think it has its use cases where it’s an incredible tool, and it provides a lot of inspiration for what we’re building as well.
Jeremy Jung 00:50:56 All right. Well, I think that’s a good place to wrap it up, but where can people hear more about you hear more about Supabase?
Ant Wilson 00:51:04 Yeah. So Supabase is at supabase.com. I’m on Twitter @AntWilson. Supabase is on Twitter @Supabase. Just hit us up, we’re quite active on there. And then definitely check out the repo: github.com/supabase. There’s lots of great stuff to dig into. As we discussed, there’s a lot of different languages, so whatever you are into, you’ll probably find something where you can contribute.
Jeremy Jung 00:51:28 Yeah, and we sort of touched on this, but I think everything we’ve talked about with the exception of the provisioning part and the monitoring part is all open source, is that correct?
Ant Wilson 00:51:39 Yeah. And hopefully everything we build moving forward, including functions and GraphQL will continue to be Open Source.
Jeremy Jung 00:51:46 And then I suppose the one thing I did mean to touch on is what is the license for all the components you’re using that are Open Source?
Ant Wilson 00:51:55 It’s mostly Apache 2.0 or MIT. And then obviously Postgres has its own Postgres license. So, as long as it’s one of those, then we’re not too precious. As I said, we inherit a fair amount of projects, or we contribute to and adopt projects. So as long as it’s just very permissive, then we don’t care too much.
Jeremy Jung 00:52:16 As far as the projects that your team has worked on, I’ve noticed that over the years, we’ve seen a lot of companies move to things like the business source license or there’s all these different licenses that are not quite so permissive. And I wonder what your thoughts are on that for the future of your company and why you think that you’ll be able to stay permissive.
Ant Wilson 00:52:39 Yeah. I really, really, really hope that we can stay permissive forever. It’s a philosophical thing for us. You know, when we started the business, we were just very into the idea of Open Source as individuals. And if AWS comes along at some point and offers hosted Supabase on AWS, then it’ll be a signal that we’re doing something right. And at that point we just need to be the best team to continue to move Supabase forward. And if we are, then hopefully we will never have to tackle this licensing issue.
Jeremy Jung 00:53:17 All right. Well, I wish you luck.
Ant Wilson 00:53:19 Thanks for having me.
Jeremy Jung 00:53:21 This has been Jeremy Jung for Software Engineering Radio.
[End of Audio]