For as long as I can remember I have been intimately involved with data.
Moving it, cleaning it, providing it to the business, whether pre-aggregated for immediate consumption or raw for further analysis.
Always I have been concerned with automating the pipeline so that the latest data is available in a timely fashion.
A lot of my recent work has been with traditional tools like SSIS, SSRS and SSAS, but I come from a background of application development. I have a long history of using code to get data out of silos and into the business where it matters.
Today my languages of choice are C# and T-SQL. Tomorrow it could be Python or PowerShell. Different problems require different approaches.
I have often argued that the Kimball model, though useful, has been broken by new technology. If I have Vertipaq, then why must I normalise my data so?
There are dozens of data storage solutions today, and the technology just gets better and better.
But whether you choose Microsoft, Amazon or Google, the old problem of working out the right question to ask just doesn't go away. Once you work that out, most tools can provide an answer. And SQL is still the King.
I read The Phoenix Project in 2014. Two months later I persuaded the Ops director at my client at the time to let me trial a new way of delivering our SQL Server releases, bringing the developers and DBAs together. It worked. I like creating positive change.
And I like mentoring people. There’s nothing so satisfying as encouraging a promising developer to realise their talent.
And of course, there's the day-to-day too. But with data, a lot of that can be automated, leaving time for talking with colleagues, finding synergies and searching for better ways of doing things.
I don't like being the smartest person in the room. Well, maybe sometimes, just not for too long.