Tidal Migrations is now Tidal!


February 2021 Newsletter, Tidal

Tidal feature releases and announcements in the February 2021 newsletter


What does next month mark the anniversary of? Nothing, don’t worry about it. We’re here to fill your brain instead with all the cool things the cloud has been doing over the past 28 days.



Do you find us charming? We’re flattered. And we want you on our team.

We’re hiring at Tidal, and if you want to be a part of what we’ve been building, see if you’d be a good fit for one of our three open roles as our newest “Hacker”.


We’ve got a double-feature this newsletter.

There’s still time to join our All-Virtual Migration Enablement Workshop, kicking off Tuesday, March 2, 2021! Register now and join us from home while our own David Colebatch walks you through the first 3 phases of the cloud migration process.

And if you can’t make this one, catch our next one on May 4th, 2021.

You can also sign up for a one-on-one if a dedicated, private workshop with our team is more your style.


When you sit down to plan your migration, the budget may be the part you dread.

To help make this a little less painful, we’re proud to introduce our new Calculator in beta.

This tool provides a rapid financial assessment for Rehost migrations, along with savings forecasts for more transformative strategies like Replatform and Refactor. That way, you’ll have what you need to build your business case and confidently pursue a more transformative migration.

Register if you are interested in trying it out!


“The cloud offers oodles of options for redundancy, and we seem to be able to spend infinitely on that. How do you know where to draw the line?” - Jake, Toronto, Canada

That’s an important question, Jake.

Back when everything was on-premises, disk failures had to be planned for and data backed up, pushing us to RAID (Redundant Array of Inexpensive Disks). The failure of network switch ports forced us into multi-homed complexity. The potential failure of power supplies meant we needed A- and B-side power feeds and dual power supplies in servers.

When that wasn’t enough, we clustered servers and replicated application state between physical hosts over dedicated cables, and then we carried that concept with us when we virtualized everything.

We turned complexity up to 11 on-premises, in a shell game of “find the SPOF” (single point of failure).

Now, the cloud abstracts all that complexity away and provides high levels of redundancy to us in a package we can consume at pennies per million web requests.

With this base layer of redundancy built into the architecture on a global scale, the decision on how much more to spend needs to be taken on an application-by-application basis.

We draw the line on how much to spend based on the value each application provides. We don’t want to spend the same $300k per year on global HA (high availability) for our e-commerce app as we do on our tweet drafting app that’s used by the marketing intern team.
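To make that idea concrete, here’s a minimal sketch of tiering redundancy spend by application value. The tier names, dollar thresholds, and example apps are all hypothetical illustrations, not Tidal’s actual model; the point is simply that the HA budget for each application should scale with the financial impact of its downtime.

```python
# Illustrative only: thresholds, tier names, and apps are hypothetical.
# The principle: match redundancy spend to each application's downtime cost.

def ha_tier(annual_downtime_cost: float) -> str:
    """Pick a redundancy tier from the estimated yearly cost of outages."""
    if annual_downtime_cost >= 1_000_000:
        return "multi-region active-active"          # e.g. the e-commerce app
    if annual_downtime_cost >= 100_000:
        return "multi-AZ with automated failover"
    if annual_downtime_cost >= 10_000:
        return "single-AZ with backups and a restore runbook"
    return "best-effort (backups only)"              # e.g. the tweet-drafting app

# Downtime-cost estimates come from the discovery phase of your assessment.
apps = {"e-commerce": 5_000_000, "tweet-drafter": 2_000}
for name, impact in apps.items():
    print(f"{name}: {ha_tier(impact)}")
```

Recording those downtime-cost figures during discovery is what makes a lookup like this possible later in the journey.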

The financial impact of each application should be identified and recorded during the discovery phase of assessing your applications, then leveraged throughout your cloud journey, especially as you implement FinOps practices like those our expert Shana is blogging about right now.

Hope this helps, and keep those questions coming!



That’s all for February. If you’ve had a good time with us this newsletter, let us know by sharing it on social media (links below) and keep your friends up to date.

And in case you were caught in the New Year haze, check out our January newsletter if you missed it.

Stay united and see you next month!