Hence the section titled "Boiling the Ocean".
In case you missed the point, I am highlighting that although "storage is cheap" and "joins are expensive" are common refrains today, don't forget the reasons techniques like dimensional modelling were invented; those reasons are still very much relevant.
Data modellers recognise this; data engineers (through no fault of their own) have forgone this knowledge in favour of pipeline automation, and because "models" are now the outcome of dbt jobs, their roles have been renamed "analytics engineers".
The article does recognise that traditional data modelling (the way it has been done for decades) needs to evolve; business units cannot wait for a model to be ready before they can populate it. OBTs are fine for starting off, but when your data hub begins to scale you are going to need a better method to manage and model that data.
Domain-driven analytics is nothing new; I highlight references to other articles I have published detailing how easily this concept ports over when combined with an enterprise data model. You can eliminate the waste (muda) by ensuring you are loading to a data model designed for passive integration on business keys. One of the images I designed for this article highlights this; it is what I called the "shared kernel" of your data vault model (in DDD parlance).
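To make the "passive integration on business keys" point concrete, here is a minimal sketch (the table and column names are hypothetical, not taken from the article): each source loads the same hub independently, and integration happens passively because every loader aligns on the shared business key alone.

```sql
-- Hypothetical data-vault-style hub keyed on the business key.
CREATE TABLE IF NOT EXISTS hub_customer (
    hub_customer_hk  CHAR(32)     NOT NULL,  -- hash of the business key
    customer_id      VARCHAR(50)  NOT NULL,  -- the business key itself
    load_dts         TIMESTAMP    NOT NULL,  -- when the key first arrived
    record_source    VARCHAR(50)  NOT NULL,  -- which system supplied it
    PRIMARY KEY (hub_customer_hk)
);

-- Each source system runs the same idempotent pattern, with no
-- dependency on any other source having loaded first: insert the
-- business key only if the hub has not seen it yet.
INSERT INTO hub_customer
SELECT MD5(TRIM(UPPER(src.customer_id))),
       TRIM(UPPER(src.customer_id)),
       CURRENT_TIMESTAMP,
       'CRM'
FROM stage_crm_customers AS src
WHERE MD5(TRIM(UPPER(src.customer_id)))
      NOT IN (SELECT hub_customer_hk FROM hub_customer);
```

A second source (say, a billing system staging table) would run the identical statement with its own `record_source` tag; neither load waits on the other, which is the waste-elimination point above.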
Thanks for highlighting microservices; the article does mention them (search for "Boiling the Ocean").
I am doing a co-presentation on the work I did with a customer at this year's Snowflake Summit; will I see you there?