How I (Vibe) Coded a SaaS Backup for monday.com to strengthen a client’s security posture

Building my own backup for monday.com

Recently, I was tasked with evaluating the IT security setup of a small media agency and assessing the associated risks. It didn’t take long to discover that their entire operational workflow was built around monday.com, their chosen project management platform.

This reminded me of a much broader issue I see often: many organizations assume that they do not have to bother with data safety when using a SaaS tool – and that is a dangerous misconception.

While it is true that platforms like monday.com (or really any other established SaaS provider) will not vanish overnight, even the most robust platforms are not immune to outages, human error, or misconfigurations. If something goes wrong – and you never know when it will – you want your own copy of critical data. Especially when your entire business depends on it.

After evaluating this risk, I decided to look for a way to back up monday.com. I explored a few third-party backup providers for monday.com, but none of them met all the requirements. In the end I decided to build my own custom backup workflow.

Choosing n8n for the custom workflow

The agency had already deployed n8n. Given that monday.com offers a powerful GraphQL API, I almost immediately decided to build the backup in n8n.

Why you always need a backup of your critical data

The assumption that you don’t have to worry about your data when using a SaaS tool is a common misunderstanding. This is where the Shared Responsibility Model comes in: while service providers are obligated to ensure infrastructure stability, uptime, and the overall security of the platform, you as the user are responsible for your own data.

This – of course – is a very simplified explanation of the Shared Responsibility Model.

There are plenty of scenarios where a local backup can save your business hours or days of disruption. Just imagine a soon-to-be-ex-employee who, on their last day, deletes important information out of revenge.

In this case your SaaS provider can’t help you. But local backups can. Having a backup of your preferred SaaS tool is not paranoia – it’s just good practice.

Backup Scope

If you don’t know what to back up, you can’t do a backup. So the first step is to define the scope.

When it comes to monday.com this is, more or less, pretty simple: all boards inside the different workspaces, including column and row values, have to be saved.

To summarize, the following data has to be saved:

  • All monday.com boards including tasks and subtasks from every workspace
  • Every column and column value
  • Complex columns that contain monday-docs
  • Task updates and activity logs

Using the monday.com API

When I first evaluated the monday.com API to find out whether it was suitable for a backup, I noticed that I could pull the relevant data for each board. Overall the monday.com API is well documented and even provides a playground, which helped a lot. Still, getting the GraphQL query right was pretty difficult. After some trial and error I was able to build a query that returned every column and column value of a board in a single response.

In the end, my GraphQL query looked similar to this:

{
  boards(ids: {{ $json.board_id }}) {
    workspace_id
    name
    columns { title id type }
    items_page(limit: 100) {
      items {
        id
        name
        column_values { id text value column { title type } }
        updates(limit: 1000) { body id created_at creator { name id } }
        subitems {
          id
          name
          column_values { id text value column { title type } }
          updates(limit: 1000) { body id created_at creator { name id } }
        }
      }
    }
    activity_logs(limit: 1000) { id event account_id user_id data }
  }
}

Scaling from one board to hundreds

The previous query returns all the relevant data for one board. That becomes a problem, however, when there are hundreds of boards that have to be saved. Once again the monday.com API helped me here.

I decided to query the IDs of all available boards inside the tenant and put them into an array. Then, using the n8n Loop Over Items node, I run the complex GraphQL query for each board.
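To illustrate the hand-off into the Loop Over Items node, here is a minimal sketch of a Code-node-style function that turns a "list all boards" GraphQL response into one item per board. The exact node setup differs; the response shape shown ({ data: { boards: [...] } }) follows the monday.com API, but the field selection is an assumption.

```javascript
// Turn the "list all boards" GraphQL response into one n8n-style item
// per board, ready to feed into the Loop Over Items node.
function boardsToItems(response) {
  return response.data.boards.map((board) => ({
    json: { board_id: board.id, name: board.name, type: board.type },
  }));
}

// Example response, trimmed to the fields the workflow needs.
const sample = {
  data: {
    boards: [
      { id: "123", name: "Projects", type: "board" },
      { id: "456", name: "Spec", type: "document" },
    ],
  },
};

const items = boardsToItems(sample);
console.log(items.length); // one item per board
```

Each resulting item carries its board_id, which the per-board query above picks up via the {{ $json.board_id }} expression.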

Converting data into usable formats

At this stage, the API returns data in a format that is not really suitable for a backup scenario. Formatting the data is therefore another crucial step.

Before explaining which formats I chose and why, I want to once again mention the purpose of this backup.

This backup is not meant to restore a monday.com board within minutes. It exists to prevent data loss due to human error, to be able to look up project progress if monday.com is temporarily unavailable, and to be able to migrate to another service provider if the worst case occurs.

Because of those considerations I decided that it is completely suitable to save the data in three different formats.

  • Boards will be saved as .xlsx
  • Task updates will be saved as .txt
  • monday-docs will be saved as .html

The boards are saved in a simple Excel file that closely mimics monday.com boards. Updates for each task are saved in separate .txt files, and monday.com docs that are part of the boards are saved as .html.

Vibe Coding!

As mentioned previously, the complex GraphQL query delivers all of the needed data. And, as it should be, the API returns data in a consistent schema. The conversion of the data into different files is therefore a static, not-that-exciting process. That makes it perfect for a bit of vibe coding.

I used ChatGPT-5 Pro to write JavaScript code that processes and formats the data. This is something anyone with a little bit of prompting experience can achieve.
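To give an idea of what that generated code does, here is a minimal sketch (not the actual vibe-coded script) of flattening one board from the query response into header-plus-rows data that can then be written to an .xlsx sheet. The field names follow the GraphQL query above; the row layout is an assumption.

```javascript
// Flatten one board's items into tabular data for an .xlsx sheet:
// column titles become headers, each item becomes one row.
function boardToRows(board) {
  const headers = board.columns.map((c) => c.title);
  const rows = board.items_page.items.map((item) => {
    const row = { Name: item.name };
    for (const cv of item.column_values) {
      row[cv.column.title] = cv.text ?? "";
    }
    return row;
  });
  return { headers: ["Name", ...headers], rows };
}

// Sample board shaped like the GraphQL response, heavily trimmed.
const sampleBoard = {
  columns: [{ title: "Status" }, { title: "Owner" }],
  items_page: {
    items: [
      {
        name: "Write article",
        column_values: [
          { column: { title: "Status" }, text: "Done" },
          { column: { title: "Owner" }, text: "Anna" },
        ],
      },
    ],
  },
};

const sheet = boardToRows(sampleBoard);
console.log(sheet.headers); // column titles prefixed with "Name"
```

A structure like this maps almost one-to-one onto a spreadsheet, which is why the Excel file ends up mimicking the monday.com board so closely.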

In the end I ended up with a consistent folder structure for every workspace.

Conclusion

Creating this backup turned out to be an interesting project, and I’m happy with how reliably it runs – the media agency is, too, because it gives them a safety net for the real worst-worst case.

Room for improvement

Right now it’s a full backup each run. To reduce disk space, network bandwidth, and execution time, an additional step could be introduced that checks when a board was last edited and skips the ones that haven’t changed.
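Such an incremental check could look roughly like this sketch: compare each board's updated_at timestamp against the time of the last successful run (which would have to be persisted somewhere, e.g. in n8n workflow static data – that storage choice is an assumption, not something the current workflow does).

```javascript
// Hypothetical incremental filter: keep only boards edited after the
// last successful backup run, so unchanged boards are skipped.
function boardsToBackup(boards, lastRunIso) {
  const lastRun = new Date(lastRunIso);
  return boards.filter((b) => new Date(b.updated_at) > lastRun);
}

const allBoards = [
  { id: "1", updated_at: "2024-05-01T10:00:00Z" },
  { id: "2", updated_at: "2024-05-10T10:00:00Z" },
];

// Only board "2" changed since the last run on May 5th.
console.log(boardsToBackup(allBoards, "2024-05-05T00:00:00Z").map((b) => b.id));
```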

Issues I ran into

Subitems

In the first step of the workflow I request all of the board IDs from the monday.com account. Besides regular boards, the API also returns various other types that are not needed.

To filter out the types that are not needed I simply used the n8n Filter node: if the type of a given board_id is neither board nor document, the board_id is disregarded.


A monday.com doc is technically a board – it therefore also has a board ID, but its type is set to document.
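Expressed as plain JavaScript rather than a Filter node, the logic is a one-liner; the sub_items_board type in the sample is one example of a type the listing may return, used here only for illustration.

```javascript
// Mirror of the n8n Filter node: keep only entries whose type is
// "board" or "document"; everything else is disregarded.
function keepBoardsAndDocs(entries) {
  return entries.filter((e) => e.type === "board" || e.type === "document");
}

const entries = [
  { board_id: "1", type: "board" },
  { board_id: "2", type: "sub_items_board" },
  { board_id: "3", type: "document" },
];

console.log(keepBoardsAndDocs(entries).map((e) => e.board_id)); // boards and docs only
```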

Error: Board has no items

The workflow had issues with boards that contain no items. This happens when the query runs against a board_id whose board is empty, or against a board_id whose type is document.

In both cases a board backup is not needed: in the first case the board is simply empty, and in the second case the document is saved using another method.

To filter out boards without any items I simply check whether the returned items array is empty. If it is, the board is disregarded.
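As a sketch, the guard reduces to checking items_page.items on the board response before any files are written; the defensive optional chaining is my addition, not necessarily part of the original workflow.

```javascript
// Guard against the "board has no items" error: skip any board whose
// items_page.items array is missing or empty.
function hasItems(boardResponse) {
  const items = boardResponse?.items_page?.items ?? [];
  return items.length > 0;
}

console.log(hasItems({ items_page: { items: [] } }));            // empty board, skip
console.log(hasItems({ items_page: { items: [{ id: "1" }] } })); // has items, back up
```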