Over the last ten years, medical science has witnessed an explosion in the quantity of molecular data available to researchers, driven primarily by advances in high-throughput technologies. This deluge of data has generated unprecedented opportunities for both basic and translational research, but it has also yielded its fair share of challenges, especially given that data-analysis methods have often lagged well behind technological advances. In this session I will share some of my experiences with big data, predominantly from the perspective of using genetics and systems biology to understand the biological basis of complex traits and diseases. I will discuss the rapid move in human genetics from candidate-gene experiments to genome-wide association studies and, most recently, to whole-genome sequencing, and I will describe my experiences with high-throughput technologies that have targeted the transcriptome, methylome, metabolome and microbiome. Central to my discussion will be an illustration of how genetics can be used not only to inform the aetiology of complex diseases and traits, but also to explore biological networks in systems biology and to infer causal relationships between the different variables. Finally, I will consider some of the inherent limitations of high-throughput technologies and the importance of detailed functional work in characterizing and following up the insights generated by “big data”. I will give my opinion on some of the most exciting developments to come out of my field, some of the challenges associated with making the best use of high-throughput data, and a few tentative solutions as to how I think these data can best be utilized in the rapidly advancing fields of genomics and personalized medicine.