In the realm of data science and educational innovation, Sebastián Flores stands out as a figure who has made a lasting impact. As Chief Data Officer at uPlanner, Sebastián has been a pioneer at the crossroads of technology and higher education. His career is a testament to his deep passion for education and his skill in harnessing data science to transform it.
Through this interview, we will delve into his remarkable career, gain valuable insights into education in the digital era, and explore the key trends he envisions for the future of data science in the education sector.
Sebastián not only excels in data management and analysis within the educational domain but is also a proponent of continuous training and adaptation to emerging technologies. His extensive experience at the intersection of academia and industry, coupled with his emphasis on hands-on learning and meticulous attention to detail, makes him an invaluable source of knowledge and perspective for those keen on navigating the ever-evolving landscape of data science and higher education.
Tell us a little about your experience and career in the field of higher education. How did you start your career in this field?
I started my career in Education because I have always been passionate about it. I’ve had the opportunity to study in different places and learn about various forms of education. I studied mathematical civil engineering at the Universidad Santa María in Chile, then applied to a double diploma program in France, where I delved into computational mechanics topics. Later, I went to Stanford to pursue a master’s degree in computational mathematics. After finishing my studies, I worked on academic and industry projects related to mathematics and modeling.
Sebastian with his classmates at Stanford University.
You mentioned that you were involved in mathematical modeling projects and also in teaching. Can you share details about your experience at the intersection of academia and industry?
After completing my initial two diplomas, I joined the Mathematical Modeling Center at the University of Chile, where I worked on projects related to the mining industry. Subsequently, at UTFSM, I focused on modeling tsunami propagation. During my tenure at Universidad Santa María, I led a redesign of a course on the application of mathematics in engineering, placing emphasis on practical applications and utilizing Python to solve data-related problems.
How did you transition into the world of data science and what led you to consider a position as head of the mathematics department at a company, in this case, uPlanner?
My transition to data science happened when a colleague shared with me a job offer to be the head of the mathematics department at uPlanner. This offer caught my attention since, at that time, it was not common for companies to have mathematics departments or focus on data analysis. I became interested, and after some conversations, I started working at uPlanner. I appreciated the trust they gave me, as well as the opportunity to develop a data science department.
You mentioned that the data science industry has evolved in recent years. Can you talk to us about how your work in this field has changed and what the most notable challenges have been?
The data science industry has evolved significantly. What we did 8 years ago is very different from what we do today. The challenges have changed, and technology has advanced. In that sense, it has been a journey of constant learning and adaptation to new techniques and technologies.
What was the experience of studying a Master’s degree at Stanford like and how did it shape you as a professional?
One of the reasons that led me to choose the Stanford program over other programs was its location in the heart of Silicon Valley, with all that this implies. The course offerings were very diverse and came from different departments, making it a truly interdisciplinary program. Furthermore, the quality of the professors was impressive; some of them had worked with prominent figures such as Steve Jobs, especially in the computer science department. We even had renowned professors like Andrew Ng, a leading figure in artificial intelligence and machine learning.
It was a privilege and a tremendous opportunity to take courses on topics I didn’t know about. I also realized that the training I had received in Chile had prepared me very well for these kinds of challenges. While the program was by no means easy, it wasn’t impossible: meeting its requirements was achievable, especially if you kept up with the weekly tasks and projects.
After my experience at Stanford, I also had the opportunity to intern at a company in Silicon Valley called Lexity. It was an extremely valuable experience as we were early in the data science boom. In this company, we used data from e-commerce users to suggest purchasing actions and marketing strategies. This was very interesting and allowed me to see the potential of data science in the real world.
I think I was lucky to be in the right place at the right time, right when a paradigm shift was occurring in how available data and tools were being used. It was an opportunity to put the knowledge I had acquired into practice and understand its value in the business world.
You mentioned that you had redesigned a course as a teacher. How do you think the methodology for teaching data science has changed? And how do you think talented young people in this field should be trained?
Yes, this is a valuable and, at the same time, challenging question to answer. Let me share the approach I took in my course. I think there are certain types of learning that can only be gained through experience.
For example, I can tell a group of students about the importance of reviewing data quality, but experiencing it is different. Running an analysis without validating the data first, and then obtaining completely inconsistent results without realizing it, is a lesson that is only learned firsthand.
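The lesson above can be sketched in a few lines. This is a minimal, hypothetical example (the dataset and column names are invented for illustration): a quick validation pass catches problems that would otherwise silently corrupt an analysis.

```python
import pandas as pd

# Hypothetical student-grades dataset with typical quality problems:
# a missing value, a duplicated student record, and an out-of-range grade.
df = pd.DataFrame({
    "student_id": [1, 2, 2, 3, 4],
    "grade": [85.0, 90.0, 90.0, None, 950.0],  # 950 is a data-entry error
})

# Naive analysis: the average computes without error, but is wrong.
naive_mean = df["grade"].mean()

# Validation pass: drop duplicate students and missing values,
# and enforce the valid grade range 0-100.
clean = (
    df.drop_duplicates(subset="student_id")
      .dropna(subset=["grade"])
      .query("0 <= grade <= 100")
)
validated_mean = clean["grade"].mean()

print(naive_mean)      # 303.75 — inflated by the duplicate and the 950
print(validated_mean)  # 87.5 — the average of the valid grades
```

Nothing in the naive version raises an error, which is precisely the point: without an explicit validation step, the inconsistent result looks like any other number.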
In my courses, I tend to take a highly experimental and hands-on approach. For example, in the Mathematics Applications in Engineering course, we alternated between theoretical classes and labs. In the labs, we gave students a Jupyter notebook, a novel format at the time that combines explanatory text with executable code. Students had to work through the notebook and complete parts of the code in real time.
My goal was for students to immerse themselves in the subject and experience real-world challenges. Through workshops and labs designed around specific concepts, they learned hands-on. Personally, I am not a fan of large assessments in the form of theory exams, as they differ significantly from the daily tasks of professionals.
In professional life, you don’t spend two hours sitting in front of a blank paper solving exercises. It is a process of dedication, research, questioning, and a deep understanding of what the client needs. In my classes, I tried to convey this reality and make students live the experience.
Encouraging experimentation and real-world problem-solving was key. Throughout the editions of the course, I saw some of my students take what they learned and apply it in their careers. Some of them joined uPlanner as interns and then stayed working as data engineers. One of them even went to do a doctorate abroad.
It was gratifying to see the learning cycle closing, from being a teacher to seeing my former students grow and face new challenges in other companies. Overall, I consider this course to have been a successful experiment in that regard.
Tell us about your experience as Chief Data Officer at uPlanner. What are your main responsibilities and notable projects in your current role?
I have focused my efforts on empowering teams and fostering their growth. Previously, I played a direct role in leading the Data Science team and managing day-to-day matters with team members. Today, Data Science is led by Camila Diaz, and Data Scientists are embedded in the product development teams. I have also recently been supporting the Data Service team, working closely with Bastian, who leads it. My role is to establish strong data governance, ranging from the quality of the data that enters our databases to how it is used in our products through Data Science algorithms.
What makes our situation even more challenging is that the type of work we do at uPlanner is quite particular and often difficult to describe to people outside our industry. Compared to other industries where input data is usually highly normalized and of high quality due to semi-automated acquisition processes, we work with direct data from universities and consultants. This data often contains a large amount of noise, gaps, inconsistencies, and errors.
Therefore, it is not the typical job in the industry. We are tasked with working with challenging data while also helping institutions improve the quality of the data they generate. In summary, my current role involves supporting the Data Service and Data Science teams in their critical and strategic processes, as well as leading key projects with strategic clients where my experience can make a difference in specific tasks of those projects.
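To give a flavor of what this kind of work involves, here is a small hypothetical sketch (the field names, labels, and rules are invented, not uPlanner’s actual pipeline): normalizing inconsistent records from a raw institutional export and flagging gaps for review rather than silently dropping them.

```python
import pandas as pd

# Hypothetical raw enrollment export from a university system:
# inconsistent program labels and a missing enrollment date.
raw = pd.DataFrame({
    "program": ["Civil Eng.", "civil engineering", "CIVIL ENG", "Math"],
    "enrolled": ["2023-03-01", "2023-03-05", None, "2023-03-15"],
})

# Map messy program labels onto a canonical vocabulary.
canonical = {
    "civil eng.": "Civil Engineering",
    "civil engineering": "Civil Engineering",
    "civil eng": "Civil Engineering",
    "math": "Mathematics",
}
clean = raw.copy()
clean["program"] = raw["program"].str.strip().str.lower().map(canonical)

# Parse dates; missing or unparseable values become NaT and are
# flagged for follow-up with the institution instead of being dropped.
clean["enrolled"] = pd.to_datetime(clean["enrolled"], errors="coerce")
clean["needs_review"] = clean["enrolled"].isna()

print(clean)
```

The design choice worth noting is the `needs_review` flag: when the data comes directly from institutions, surfacing a gap back to them improves the source data over time, whereas quietly discarding the row would hide the problem.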
As you mentioned, at uPlanner we do a particular job with data, and taking this into account, what advice or recommendations would you offer to young professionals who want to be successful in the field of data science?
There are two fundamental capabilities that are crucial for a Data Scientist or Data Engineer. First, there is the ability to constantly learn. This goes beyond simple passive learning. It’s about having the mindset of actively and proactively learning. A Data Scientist or Data Engineer must be curious and willing to acquire new knowledge without being explicitly asked to do so. This includes learning new tools, methodologies, or programming languages, even if at the time it is not known whether they will be useful in the future.
Second, there is critical thinking and attention to detail. This is a more difficult skill to teach, as some people have it naturally while others do not. You need to be critical to notice when something is not right in the data or in the analysis. It’s not just about completing tasks on a list, but about identifying new questions and challenges as you progress through a project. This critical capability is essential to understanding customer needs and providing effective solutions.
These two capabilities, constant learning and critical thinking, are essential in the field of Data Science. Although they can be developed over time, it is beneficial if Data Scientists already possess them from the start, as they are difficult to teach directly. Working in an environment where constant learning is encouraged and attention to detail is valued can help cultivate these skills in the team.
How do you think universities should adapt to these new technological and data tools that are appearing?
Higher education institutions need to adapt and acknowledge that students will use tools such as artificial intelligence or ChatGPT to complete assignments or respond to a project. And I believe there is a very significant need for institutions to train their teachers in this, right? It’s naive to think that students are not using these publicly available tools. It’s like thinking that students weren’t Googling assignment prompts to see if answers were already online, and often it’s even the teachers’ fault, because they use the same examples year after year.
On one hand, we need to educate teachers about the existence of these technologies and how to design tasks that accommodate them. Perhaps the best example I encountered was an instructor who, for a particular assignment, had his students enter the task instructions into ChatGPT. They then copied the response, and from there added elements based on their own critical thinking and knowledge.
In this sense, it’s important to remember that jobs aren’t being automated; specific tasks within a job are. I can automate tasks like dictation, but the role of a secretary has not disappeared. This is because there are tasks that can be automated in that profession, but it doesn’t mean the profession itself vanishes. Hence, professions will be enhanced with these tools because we save time, and it’s crucial that the individuals being trained are quick learners, digital natives who can apply all these technologies.
What is your vision for the future of data science in higher education and the industry as a whole? What are the key trends you anticipate in this field?
I think completely replacing traditional universities is a very difficult task, and I actually don’t think it’s desirable. Universities have a long history of being an effective learning mechanism that has endured for centuries. They have an intrinsic value in the training of students that goes beyond the transmission of knowledge.
New technologies and teaching methods, such as online courses and distance education, are valuable additions that can provide flexibility and access to education to a broader audience. However, they cannot completely replace the learning experience offered by traditional universities.
Direct interaction with teachers and classmates, the possibility of doing practical exercises in the classroom, real-time problem-solving, and the opportunity to develop communication and collaboration skills are essential aspects of education that are not easy to replicate in an online environment.
Instead of seeing it as a competition, I believe that new ways of teaching can coexist and enrich the educational landscape. Technology can provide students with additional options and flexibility to learn based on their individual needs and circumstances.