The data science triangle: a model to develop great data teams

There’s no such thing as a standard end-to-end data science journey.

With organisations across every industry facing distinct challenges in managing their data strategy, each has its own specific requirements. Tech stacks, datasets, and objectives vary with the size and strengths of each business – which means that even the best data scientists won’t necessarily be equipped to handle every challenge.

Leading organisations are those developing strong data science teams that can adapt regardless of the circumstances. That often requires professionals from different backgrounds, with three archetypes coming to mind.

Firstly, there’s the programmer-turned-data scientist, who specialises in the programming languages used to work with data. Then there’s the mathematician-turned-data scientist, who is best placed to analyse quantitative data via statistical methods. Finally, you’ve got the ‘texpert’ data scientist, combining technological literacy with technical expertise to grapple with data.

Think of each type of data scientist as one side of a triangle. With so much expertise to absorb and so many different scenarios to navigate, it’s extremely difficult for one individual to cover every base. With that in mind, organisations would ideally include all three archetypes when building futureproof data teams.

The Programmer

Data science is about analysing information for insights and preparing it for more advanced applications, such as machine learning (ML). This can’t be done without strong coding expertise: various languages and their associated libraries are a necessity for data-related tasks.

Python is an industry-standard language for data science that’s easy to learn, with important libraries like Pandas for data analysis, Matplotlib for data visualisation, and Scikit-learn for ML. R is also important, offering a wide array of functions for analysing data and developing statistical software. Furthermore, newer languages such as Julia – which accelerates data analysis with its native speed – are surfacing and gaining traction in the data science community.
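As a minimal sketch of how these libraries fit together – the column names and figures below are invented purely for illustration – a programmer might explore a dataset, fit a model, and plot the result in a few lines:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Hypothetical sales data, invented for illustration only
df = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50],
    "revenue": [25, 48, 70, 95, 118],
})

# Pandas: quick summary statistics for the dataset
print(df.describe())

# Scikit-learn: fit a simple linear model of revenue vs. ad spend
model = LinearRegression().fit(df[["ad_spend"]], df["revenue"])
print(f"Estimated revenue per unit of ad spend: {model.coef_[0]:.2f}")

# Matplotlib: visualise the raw data alongside the fitted trend line
plt.scatter(df["ad_spend"], df["revenue"], label="observed")
plt.plot(df["ad_spend"], model.predict(df[["ad_spend"]]), label="fitted")
plt.xlabel("Ad spend")
plt.ylabel("Revenue")
plt.legend()
plt.show()
```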

Programmers are also uniquely well-positioned to code UDFs (User Defined Functions). These scripts allow organisations to program their own analyses and perform other operations within analytics databases, enabling them to address problems that can’t be solved through a sole reliance on SQL (Structured Query Language). While SQL can be used to administer databases and retrieve information, it’s far less versatile without the additional functionality that UDFs make programmable.
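Every analytics database has its own UDF framework, so the details vary by platform. As a minimal, self-contained illustration of the idea, here is a Python function registered as a UDF in SQLite – once registered, plain SQL can call logic it couldn’t express on its own:

```python
import math
import sqlite3

# A custom function that standard SQL lacks: log-scaling a value
def log_scale(value):
    return math.log10(value) if value and value > 0 else None

conn = sqlite3.connect(":memory:")
conn.create_function("LOG_SCALE", 1, log_scale)  # name, arg count, callable

conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 1200.0), ("south", 98000.0), ("west", 450.0)],
)

# The UDF is now callable from SQL like any built-in function
for row in conn.execute("SELECT region, LOG_SCALE(revenue) FROM sales"):
    print(row)
```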

Without expertise of this kind, data science becomes virtually impossible. No matter how strong the mathematicians in your team, or the scientific brains capable of interpreting the results, you need to be able to manipulate data – and as data volumes grow, that manipulation needs to happen at scale.

The Mathematician

Of course, you also can’t afford to make decisions before you’ve got the facts. This means you’ll need analysts to interpret quantitative data, whether it’s sales figures, inventory levels, or customer satisfaction surveys. This isn’t lost on a range of organisations that are leveraging data for higher revenues, personalised customer interactions, and more.

To realise these benefits, you’ll need to identify which business questions need answering. Then you’ll need analysts who can query the information from your data warehouse before using statistical methods to find trends, correlations, outliers, and variations that tell a story. The results of this analysis will confirm whether the data answered your original query, and what recommendations can be made for the overall business strategy.
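As a brief sketch of that analytical step – assuming the data has already been queried out of the warehouse into a DataFrame, with invented column names and values – an analyst might check for correlations and outliers like this:

```python
import pandas as pd

# Assume this frame was queried from the warehouse; values are invented
df = pd.DataFrame({
    "units_sold": [120, 135, 128, 510, 142, 130, 125],
    "satisfaction": [4.1, 4.3, 4.2, 2.0, 4.4, 4.2, 4.0],
})

# Correlation: do sales volumes and satisfaction scores move together?
print(df["units_sold"].corr(df["satisfaction"]))

# Outliers: flag rows more than 2 standard deviations from the mean
z_scores = (df["units_sold"] - df["units_sold"].mean()) / df["units_sold"].std()
print(df[z_scores.abs() > 2])
```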

Ex-mathematicians often have the perfect mind and background for these methods and processes. Their ability to interrogate and manipulate data points in an empirically sound manner is fundamental to meaningful results – and relies on an education that technicians or programmers may not have shared.

The Technician

Becoming a data-led organisation requires not only the right tech stack to manage data, create visualisations, and train ML algorithms, but also the ability to scale that stack with the requirements and size of your data. It extends to your complete data engineering pipeline, Business Intelligence (BI) tools, and the way in which models are deployed. That means it’s also important to include very technically minded data scientists on the team.

Most importantly, you’ll need data warehouse specialists who can work with any infrastructure, whether it’s an on-premises, cloud-based, or hybrid solution. In particular, data professionals working with hybrid solutions will benefit from the flexibility to address various data scenarios in any operating environment, whether that means storing sensitive data locally or benefitting from greater scalability in the cloud.

The best analytics platforms will also interface with ETL (Extract, Transform, Load) tools to collect data from various sources, before transforming it through cleaning and deduplication, and loading it into the target platform. Data can then be prepared and data models loaded for front-end visualisation or AI/ML work, enabling BI developers and ML engineers within the team to run faster analytics, and train algorithms for a variety of use cases.
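As a simple sketch of that extract-transform-load flow – the file names, column names, and target database below are all hypothetical – a pipeline might look like this:

```python
import sqlite3

import pandas as pd

# Extract: pull raw records from two hypothetical source files
orders_csv = pd.read_csv("orders_export.csv")        # e.g. a CRM export
orders_json = pd.read_json("orders_api_dump.json")   # e.g. an API dump

# Transform: combine the sources, clean the data, and deduplicate
orders = pd.concat([orders_csv, orders_json], ignore_index=True)
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders = orders.dropna(subset=["order_id", "order_date"])
orders = orders.drop_duplicates(subset="order_id")

# Load: write the cleaned table into the target analytics database
with sqlite3.connect("warehouse.db") as conn:
    orders.to_sql("orders_clean", conn, if_exists="replace", index=False)
```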

With so many potential platforms and applications to navigate, a technical expert is imperative to progressing data science initiatives at meaningful speed.

Make your infrastructure adaptable

Growing data volumes and a multiplicity of use cases have made data science increasingly complex. It’s also quite rare to encounter a data scientist with the skill sets of all three archetypes from our triangle. That means organisations need teams with a mixture of skill sets, plus the right solution to assist data scientists in the specific work they perform, regardless of the project or the phase of the organisation’s analytics journey. Composing teams in this way helps build scalable businesses and robust, self-reliant data teams.

The right data science platform will enable teams to work with data within a database platform, without the limitations of specific programming requirements, while operationalising data science at speed. With correct planning and execution, these platforms will empower programmers, mathematicians, and technicians to get maximum value from their data with minimal effort.


About the Author

Eswar Nagireddy is Senior Product Manager, Data Science at Exasol. Exasol is passionate about helping companies to run their businesses smarter and drive profit by analyzing data and information at unprecedented speeds. The company develops the world’s fastest in-memory database for analytics and data warehousing, and offers first-class know-how and expertise in data insight and analytics. The in-memory analytic database is the first to combine in-memory technology, columnar compression, and massively parallel processing, and is proven to be the world’s fastest, topping the TPC-H benchmark tests for performance.

Featured image: ©overrust
