Doltgres Now Supports Triggers
We're continuing to make progress on DoltgreSQL, a version of Dolt built to be a drop-in replacement for PostgreSQL. Dolt is a MySQL-compatible database built from the ground up with Git-influenced version control features, meaning you can branch, merge, diff, and more with both your data and your schema. We recently launched Doltgres into Beta, and our next milestone is the 1.0 release. Today, I'm excited to share that we've finally landed support for triggers!
What are triggers?
Triggers can be used for quite a few purposes, but the best way to think of them is as functions that execute in response to DML statements. DML statements are used to modify the contents of tables; examples are INSERT, UPDATE, and DELETE.
One use of a trigger is to ensure that data conforms to some specification (or expectation) before it's actually written to the table. For example, let's say that you have a table of addresses, and these addresses all follow the format:
STREET_NUMBER STREET_NAME CITY, STATE ZIP_CODE
You could create a trigger that occurs before every INSERT and UPDATE, which may return an error if the address does not adhere to the format, or it may even be able to fix the address, ensuring that the correct address format is always written.
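As a rough sketch (the addresses table, its columns, and the regular expression below are all hypothetical, and only loosely match the format above), such a validation trigger might look like the following in standard Postgres syntax; depending on where Doltgres' PL/pgSQL support currently stands, pieces such as RAISE or the regex match operator may need adjusting:
CREATE TABLE addresses (id INT4 PRIMARY KEY, address TEXT);

-- Reject any row whose address does not loosely match
-- "STREET_NUMBER STREET_NAME CITY, STATE ZIP_CODE".
CREATE FUNCTION check_address_format() RETURNS TRIGGER AS $$
BEGIN
    IF NEW.address !~ '^\d+ .+ .+, [A-Z]{2} \d{5}$' THEN
        RAISE EXCEPTION 'address "%" does not match the expected format', NEW.address;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER check_address_format_trigger BEFORE INSERT OR UPDATE ON addresses FOR EACH ROW EXECUTE FUNCTION check_address_format();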
Another use of triggers is to track when changes were made to a table and by whom. You could create a set of triggers that occur after every INSERT, UPDATE, and DELETE, and have them write the current time alongside the current user to some tracking table. You could then reference that tracking table to see who modified the table and when.
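As another rough sketch (the tracked table, the audit_log table, and the log_change function are all hypothetical names), such a setup might look like the following in standard Postgres syntax; whether Doltgres already supports every piece of this, such as the TG_OP variable, is worth verifying:
-- Assume "tracked" is an existing table whose changes we want to record.
CREATE TABLE audit_log (changed_at TIMESTAMP, changed_by TEXT, operation TEXT);

-- Record who changed the tracked table, when, and which operation fired.
CREATE FUNCTION log_change() RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO audit_log VALUES (now(), current_user, TG_OP);
    RETURN NULL; -- the return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_tracked_changes AFTER INSERT OR UPDATE OR DELETE ON tracked FOR EACH ROW EXECUTE FUNCTION log_change();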
Triggers can also be made conditional by attaching a condition (a WHEN clause) to them: the trigger only executes when the condition succeeds. By chaining multiple triggers together, you can create some exceptionally complex functionality. In fact, Postgres itself implements foreign keys using triggers (with a bit of internal help to deal with potential issues like race conditions), so the possibilities are endless.
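For instance, reusing the hypothetical tracked table and log_change function from the sketch above (and assuming tracked has a column named v1), a conditional trigger in standard Postgres syntax might look like this:
-- Only log UPDATEs that actually change the v1 column.
CREATE TRIGGER log_tracked_value_changes
    AFTER UPDATE ON tracked
    FOR EACH ROW
    WHEN (OLD.v1 IS DISTINCT FROM NEW.v1)
    EXECUTE FUNCTION log_change();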
Example
I'll assume that you've started a fresh Doltgres server and logged in as the default postgres user with the default password 'password'. We strongly encourage everyone with a public-facing server to change this password in a real application, but for this example we'll leave it as-is. In addition, the default database is postgres, so make sure that you connect to that one for this example.
You can use your favorite Postgres client to connect to Doltgres, as we are completely wire-compatible (after all, our goal is to be a drop-in replacement for existing Postgres servers).
First, let's create two identical tables:
postgres=> CREATE TABLE main (pk INT4 PRIMARY KEY, v1 TEXT);
CREATE TABLE
postgres=> CREATE TABLE secondary (pk INT4 PRIMARY KEY, v1 TEXT);
CREATE TABLE
We recently added support for custom functions via PL/pgSQL, so let's create a function that will serve as the trigger function:
postgres=> CREATE FUNCTION insert_into_main_func() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO secondary VALUES (NEW.pk+100, NEW.v1||'_secondary');
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
Within a trigger function, we have access to the values NEW (for INSERT and UPDATE) and OLD (for UPDATE and DELETE), which represent the new and old rows respectively.
We take advantage of this to insert our NEW row into our secondary table, while adding 100 to the primary key and appending '_secondary' to our string.
We'll now create a trigger and insert a few values:
postgres=> CREATE TRIGGER insert_into_main BEFORE INSERT ON main FOR EACH ROW EXECUTE FUNCTION insert_into_main_func();
CREATE TRIGGER
postgres=> INSERT INTO main VALUES (1, 'hello'), (2, 'world');
INSERT 0 2
We can query our main and secondary tables to see their contents:
postgres=> SELECT * FROM main;
pk | v1
----+-------
1 | hello
2 | world
(2 rows)
postgres=> SELECT * FROM secondary;
pk | v1
-----+-----------------
101 | hello_secondary
102 | world_secondary
(2 rows)
Our secondary table contains the modified data of main, even though we only used a single INSERT statement into main. Since the trigger contains an INSERT into secondary, we're able to keep it up-to-date so long as we're only using INSERT statements on main.
What if we want to also handle UPDATE? We can make another trigger for that. We'll create a new function whose trigger fires before each UPDATE, and we'll take into account the modified format that secondary expects.
postgres=> CREATE FUNCTION update_on_main_func() RETURNS TRIGGER AS $$
BEGIN
DELETE FROM secondary WHERE pk = OLD.pk+100;
INSERT INTO secondary VALUES (NEW.pk+100, NEW.v1||'_updated');
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION
postgres=> CREATE TRIGGER update_on_main BEFORE UPDATE ON main FOR EACH ROW EXECUTE FUNCTION update_on_main_func();
CREATE TRIGGER
postgres=> UPDATE main SET pk = pk+10 WHERE pk = 1;
UPDATE 1
Now we can fetch the contents of our main and secondary tables again:
postgres=> SELECT * FROM main;
pk | v1
----+-------
2 | world
11 | hello
(2 rows)
postgres=> SELECT * FROM secondary;
pk | v1
-----+-----------------
102 | world_secondary
111 | hello_updated
(2 rows)
Our secondary table successfully handles UPDATE statements too!
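For completeness, here's a sketch of how the example could also mirror deletions; this function and trigger are not part of the walkthrough above, just an illustration of using OLD with DELETE:
-- Hypothetical extension: when a row is deleted from main, also remove
-- its mirrored row (pk+100) from secondary. OLD holds the deleted row.
CREATE FUNCTION delete_from_main_func() RETURNS TRIGGER AS $$
BEGIN
    DELETE FROM secondary WHERE pk = OLD.pk+100;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER delete_from_main AFTER DELETE ON main FOR EACH ROW EXECUTE FUNCTION delete_from_main_func();
With this in place, deleting a row from main would remove its mirrored row from secondary as well.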
We could continue to expand on this example to check out WHEN, explore what happens when multiple triggers are defined for the same DML statement on the same table, and more, but I'll leave that as an exercise for the reader.
What's Missing?
The biggest omissions are FOR EACH STATEMENT, support for TRUNCATE statements, and deferred timing.
If you have a use-case that requires these, then do let us know and we can prioritize their implementation!
Conclusion
DoltgreSQL continues to improve, and we are excited about both its reception and its future. The addition of triggers means that many more projects "just work", easing the transition from Postgres to Doltgres so you can take advantage of its numerous versioning features. If any features that are critical to your workflow are still missing, then let us know! You can find us on Twitter/X and chat with us on Discord. We also welcome all issue reports! Thank you for reading the blog post!