An important but easy-to-use tool for uncertainty quantification that every data scientist should know.
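The teaser doesn’t name the tool, so as a loudly labeled assumption, here is a minimal sketch of one easy-to-use uncertainty-quantification technique it could plausibly mean, the bootstrap confidence interval (the data and the 95% level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=200)  # illustrative data

# Bootstrap: resample with replacement, recompute the statistic each time
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# Percentile 95% confidence interval for the sample mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {sample.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Because the recipe only requires resampling, the same loop works unchanged for medians, quantiles, or model evaluation metrics.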
A Data Scientist’s Guide to Longitudinal Experiments for Personalization Programs: unlocking rapid “test-and-learn” and capturing full-scale personalization value from longitudinal experimentation. A/B testing vs. longitudinal experiments: experimentation does not always need to be complex; a simple A/B test…
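The teaser cuts off before the comparison, but as a hedged sketch of what a longitudinal design adds over a one-shot A/B readout, here is a toy difference-in-differences on simulated weekly engagement (all names, sizes, and effects are made up for illustration):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n_users, n_weeks = 1_000, 8  # weeks 0-3 pre-launch, weeks 4-7 post-launch

df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), n_weeks),
    "week": np.tile(np.arange(n_weeks), n_users),
})
df["treated"] = df["user"] % 2                 # half the users get personalization
df["post"] = (df["week"] >= 4).astype(int)

# Simulated engagement: a background time trend plus a small persistent
# lift for treated users after launch
df["engagement"] = (
    5 + 0.1 * df["week"] + 0.4 * df["treated"] * df["post"]
    + rng.normal(0, 1, len(df))
)

# Difference-in-differences:
# (treated post - treated pre) - (control post - control pre)
g = df.groupby(["treated", "post"])["engagement"].mean()
did = (g.loc[(1, 1)] - g.loc[(1, 0)]) - (g.loc[(0, 1)] - g.loc[(0, 0)])
print(f"estimated lift from personalization: {did:.3f}")
```

Following the same users across pre- and post-launch weeks is what lets the estimate separate the personalization lift from the background time trend, which a single post-launch A/B snapshot cannot do.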
Opinion: An analysis of Google’s unique approach to AI hardware. Nvidia’s stock price has skyrocketed because of its GPUs’ dominance in the AI hardware market. However, at the same time, TPUs, the well-known AI hardware from Google…
A nonlinear correlation measure for your everyday tasks. Traditional correlation coefficients such as Pearson’s ρ, Spearman’s ρ, or Kendall’s τ are limited to detecting linear or monotonic relationships and struggle to identify more complex association structures.
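That limitation is easy to demonstrate on a simple non-monotonic relationship. The teaser doesn’t name the measure it proposes, so the sketch below uses mutual information purely as an assumed stand-in for a nonlinear dependence measure:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2_000)
y = x**2 + rng.normal(0, 0.05, x.size)  # strong but non-monotonic dependence

print(f"Pearson r  = {pearsonr(x, y)[0]:+.3f}")   # near 0: no linear trend
print(f"Spearman ρ = {spearmanr(x, y)[0]:+.3f}")  # near 0: no monotonic trend

# A nonlinear measure (here, mutual information) is clearly positive
mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]
print(f"mutual information = {mi:.3f}")
```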
A comparative analysis of AI and the biological brain.
In my previous articles, I wrote about using Knowledge Graphs in conjunction with RAGs and how Graph techniques can be used for Adaptive…
What is type widening and why does it matter?
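The teaser doesn’t say which system it covers, so as a generic, assumed illustration of the concept only: NumPy widens (promotes) dtypes during mixed-type operations so that information isn’t silently lost:

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([0.5, 1.5, 2.5], dtype=np.float64)

# Mixing int32 with float64 widens the result to float64,
# so fractional values are preserved rather than truncated
print((a + b).dtype)                          # float64
print(np.result_type(np.int32, np.int64))     # int64: the wider integer wins
print(np.result_type(np.int64, np.float32))   # float64: widened to hold both
```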
Tips on how to approach studying and practising data science.
Why I think smaller open-source foundation models have already begun replacing proprietary models from providers such as OpenAI in…
Why handling negative values should be a cinch. Many models, such as linear regression, k-nearest neighbors, and ARIMA, are sensitive to outliers. Machine learning algorithms can overfit and may not generalize well in the presence of outliers.
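The teaser breaks off before naming its fix. One common reason negative values feel tricky is that the usual log transform for taming outliers is undefined for them; a signed log and the Yeo-Johnson power transform, shown here as assumed examples rather than the article’s method, are two standard workarounds:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

x = np.array([-500.0, -3.0, 0.0, 2.0, 10.0, 8_000.0])  # outliers on both sides

# Signed log: defined for negatives, symmetric around zero, compresses outliers
signed_log = np.sign(x) * np.log1p(np.abs(x))
print(signed_log.round(2))

# Yeo-Johnson: a Box-Cox-style power transform that accepts negative inputs
pt = PowerTransformer(method="yeo-johnson")
print(pt.fit_transform(x.reshape(-1, 1)).ravel().round(2))
```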