How I Leveraged the Pandemic as an Opportunity for Personal Growth

Brian G Herbert
16 min read · Jan 13, 2021


Both the threat of illness or death from Covid-19 and the ongoing restrictions and disruption to normal life can lead to feelings of frustration and powerlessness. By March 2020 I was collecting and analyzing data on the pandemic and building analysis apps in Python as part of a personal mission to build up my competence in data science. In the course of my journey I developed two personal themes:

· Be an active rather than a passive consumer of information about the world around me. Being able to locate and extract pertinent data, then properly analyze and interpret it, is empowering and gives me a powerful B.S. detection system! Across politics, news media, social media, and search, there are many forces at work, and not all of them are interested in providing unbiased, objective information. It has never been more important for individual citizens to have tools to analyze source information and interpret it in terms of their own interests. To become a data scientist, to me, is to be an informed citizen in our increasingly online world.

· I am fascinated not just with Artificial Neural Networks but with Biological Neural Networks as well. I read about Neuroscience, and recent developments in Neuroplasticity indicate we can master new skills, adapt after brain or bodily injury, and take on new challenges much later into adult life than was previously thought possible. After decades of work I wanted to drive my career into Big Data/ML/AI, the most exciting technology wave since I worked on the original mobile phone systems in the early '90s! I began a sustained, immersive journey during which I would often spend 12+ hours a day researching areas for analysis, writing code, debugging, and asking questions online when I got stumped. Many people work long hours like that in their jobs, but few do it as part of an unpaid, self-inflicted effort! Eventually I no longer had to translate into Python; I thought in Python, and writing the next method, object class, or package became a more fluid process.

My career has been in software development within the Telecommunications industry. I have held a wide range of roles over three decades designing, developing, implementing, selling, and marketing the systems used to run a telecom provider's business and interface with the network. In recent years I felt like Sisyphus at times, working on yet another enterprise systems project that had become a death march: behind schedule, with tension between stakeholders escalating. Although that is exactly the type of situation that requires breaking out of the status quo and innovating in the interest of achieving a breakthrough, the organizational psychology at work often prevents that from occurring.

My greatest successes have come from taking risks when I felt I had the insight to break from the status quo and apply my ideas. I call this my path to innovation if I'm feeling bold! Whatever I call it, I know that my ability to do something better and different has always depended on developing deeper and more extensive knowledge than my competition, whether for a job, for project resources, for the ear of management, or to close a sale with a client. Once upon a time, I used to spend evenings reading through printouts of source code from my company's industry-leading mobile phone service billing system. OK, that was a bit extreme and tedious, but it gave me a deep foundation that added value to my work for years. Had I continued in my past line of work instead of committing to this personal data science mission, I would have sacrificed the bandwidth I needed to acquire the baseline knowledge I had in my sights. That is a long explanation for why I needed to make a clean break, throw myself into my data science journey, and not look back!

As I became aware of the global impact of Covid-19 in March of 2020, I had been pushing myself through a personal journey with data science. I had recently completed two Emory University certificate programs in data science (Big Data/Analytics and Machine Learning/Python). While I learned a lot at the level of descriptive and predictive models and ML and AI concepts, I was left with many 'known unknowns.' I knew that to become a real practitioner I had to fight through the gritty details of coding, debugging, and deploying solutions. I knew I'd make mistakes and create a fair amount of throwaway work; I'd have to bear the cost of my investment in climbing the curve.

When Covid-19 hit, I had concepts for analytics apps in a few business areas and was in the formative stages of writing code for them. I quickly realized that a better 'killer app' for my journey was to analyze data related to the pandemic. I have since added several posts on my blog, The Analytics of Value, describing my 2020 journey climbing the Python and analytics-app learning curves.

Earlier in my career I wrote code, but in recent years I have worked in product management, project oversight, and consulting. My career had taken me in and out of business intelligence and analytics initiatives, and recent experience convinced me of the importance of machine learning and artificial intelligence. Over ten years ago I won a telecom industry award for Innovation on a Business Intelligence implementation I directed at a major client, but I didn't remain specialized in B.I. At the time there seemed to be more job security in being a generalist, but with the growth of ML/AI and the complexity of the field, I decided that unless I immersed myself and committed to it, I might only ever work on the perimeter of data science and never be seen as a credible, strategic resource. So I created a mid-career 'sabbatical,' of which the past eight months have been focused on building my competence with Python to analyze the pandemic.

Writing Python code has not been my only major activity during this period. I remodeled and flipped a house for profit, I was active in the lives of my two teenage kids, and I pursued health and fitness goals. This past fall I also moved back to Colorado after having lived in the Atlanta area for the past 12 years. During my sabbatical I achieved the life balance that was often a struggle earlier in my career. I'm more optimistic and friendly these days; now I simply need to maintain that as I return to work for someone else! Perhaps I reveal too much here and am too honest about the challenges of finding happiness and fulfillment in a tech career these days, but my attitude has always been that we only get one life, and thus one chance to speak honestly and candidly.

The time off gave me flexible blocks of time to deep-dive on data science topics and read books on a range of Machine Learning, Artificial Intelligence, Modeling, and Algorithm subjects. Having more than the typical hour or two in the evening after work allowed me to explore complementary topics like neuroscience and behavioral economics. On the technical side I had time to compare development tools, study Python architecture, explore data validation and ETL-task reuse, build fault-tolerance features, and performance-tune my apps. Under a typical death-march project deadline, it is impossible to carve out time for those types of activities!

Often on a project deadline it's impossible to find time to build complementary knowledge or explore promising alternatives, given the pressure to take the shortest path to the deliverable. However, what I've learned from my career is that when I have invested in more than just 'straight-line' competence in my work tasks, supplementing my skills with deeper and wider knowledge than seems strictly necessary to do my job, I have been better at innovation than my peers and have been able to provide my employer with competitive advantage. Against immediate deliverables it can look like taking a step back before taking multiple steps forward, so it takes a high level of trust and collaboration with stakeholders to invest in a break from the status quo. The past eight months have allowed me to pursue my work with this freedom to explore best practices or apply my own A/B testing. Whether I now pursue entrepreneurial ideas or put my energy to work for an employer, I have built the robust baseline knowledge that I sought after earning my certificates in data science.

Moving from classroom to 'real-world' applications requires a change in approach, and for me the first task was selecting and configuring my development environment. Anaconda's Jupyter Notebooks are great for interactive work on a high-level problem, such as working through an exercise jointly between teacher and student in class. As I started conceiving and coding my own end-to-end analyses of various pandemic topics, the lines of code in each app grew from a few dozen to several hundred, and I began writing and importing self-defined modules as well as integrating more and more third-party modules. Also, rather than the ad-hoc, step-by-step analysis done in a classroom, I was writing 'apps', which is to say executable processes that integrate data importing, wrangling, integration, analysis, and the production of charts and map plots into a seamless execution. Such an app is difficult to manage with Jupyter Notebooks; I wanted more of an integrated development environment like the ones I was familiar with for Java or C++, like Eclipse or NetBeans but built for Python.
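To make the distinction concrete, here is a minimal sketch of the kind of structure I moved toward: one executable script that chains the steps together instead of a collection of notebook cells. The file name, function names, column names, and data URL are illustrative placeholders, not my actual project code.

```python
# covid_pipeline.py -- illustrative skeleton only; the data URL and column
# names below are placeholders, not the actual project code.
import pandas as pd

DATA_URL = "https://example.com/daily_covid_counts.csv"  # placeholder source


def import_data(url: str) -> pd.DataFrame:
    """Pull the latest raw data from a published CSV source."""
    return pd.read_csv(url)


def wrangle(raw: pd.DataFrame) -> pd.DataFrame:
    """Validate, clean, and type the raw rows so downstream steps can trust them."""
    df = raw.dropna(subset=["date"]).copy()
    df["date"] = pd.to_datetime(df["date"])
    return df


def analyze(df: pd.DataFrame) -> pd.DataFrame:
    """Derive the measures the charts and maps will plot."""
    return df.sort_values("date")


def make_plots(df: pd.DataFrame) -> None:
    """Produce charts and map plots; a real app would write image files here."""
    ...


def main() -> None:
    # One seamless execution: import -> wrangle -> analyze -> plot
    results = analyze(wrangle(import_data(DATA_URL)))
    make_plots(results)


if __name__ == "__main__":
    main()
```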

My project has been primarily concerned with the mechanics of producing end-to-end analyses, where the first step is pulling in updated raw data and doing data wrangling to validate, clean, format, and integrate it. As wrangling tasks take up so much brute-force effort on data science projects, it made sense to focus on identifying and coding reusable tasks. Had I focused on applying a neural network model my work might have turned more heads, but I focused on the areas with the greatest potential gain in productivity. If I want to be able to spend more time on the cool stuff, the Bayesian models or Convolutional Neural Nets, then why not first find ways to maximize the reuse and automation of the tedious, mind-numbing work with raw data? Some firms solve this by hiring out cheap labor via Amazon's Mechanical Turk, and to be fair some of the tasks performed there are not strictly 'wrangling', but my point is that I was doing things end to end with only my own labor, so I sought to improve my productivity wherever possible. There was also a conceptual shift I needed to make in my approach to Python versus my dated experience coding in languages like Java or C++. For instance, explicit iteration is almost always sub-optimal, while vector operations are very efficient in Python, so I often had to refactor my early work as I learned to write more efficient code. This is a great example of an area that is hard to teach in the classroom and can only be learned through experience on real projects.
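As a toy example of the kind of refactoring I mean (the column names are made up for illustration), here is a daily positive percentage computed first the way my Java/C++ habits suggested, with an explicit loop, and then as a single vectorized pandas expression:

```python
import pandas as pd

# Hypothetical daily test volumes and positive counts.
df = pd.DataFrame({
    "daily_tests":     [1200, 1500, 900, 2000],
    "daily_positives": [  96,  120,  81,  140],
})

# Old habit: iterate row by row (correct, but slow in pandas at scale).
pct = []
for _, row in df.iterrows():
    pct.append(100.0 * row["daily_positives"] / row["daily_tests"])
df["positive_pct_loop"] = pct

# Refactored: one vectorized expression does the same work column-wise.
df["positive_pct"] = 100.0 * df["daily_positives"] / df["daily_tests"]

print(df)
```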

To improve my development environment, I focused on how to raise productivity in the code-test-debug cycle. As I climbed the learning curve I did a lot of refactoring, so I wanted an IDE that would support that task. I also wanted to increase the reuse of my code across projects. One of the first things I did was define evaluation criteria such as reliability, extensibility, and scalability, and I assessed six different Python development environments. I found what I was looking for in PyCharm Professional from JetBrains. If I hadn't been on a dedicated sabbatical in which I had committed months to building out my Python development skills, I wouldn't have spent two weeks setting up an evaluation and testing different development tools. I came out of it that much deeper and broader, as I have with other deep dives on this journey. Again, it's just my philosophy, but tight deadlines often prevent us from investing in knowledge or evaluating options that have long-term value. My Python journey has cost me personal savings, but I have no doubt I will look back on it and see a highly positive payoff.

I've written several blog posts on my site, The Analytics of Value on Blogger.com, in which I drill down on data sources, wrangling and fault tolerance, gaining domain knowledge, the context and relevance of different types of measures, and other things I've learned during the course of my personal project with Covid-19 analysis.

Since I focused exclusively on building apps to analyze Covid-19 data, I also had a domain knowledge mountain to climb: public health and epidemiology. Part of this was the technical side of getting to know data sources, which are the product of business processes. For example, to understand the accuracy of mortality data I needed to understand the process for completing and reporting death certificates, the codes and standards used for the forms, the service levels and timeframes, and how records are rolled up from a local medical examiner to the county, state, and federal/CDC levels. I wanted to develop a context for Covid-19 relative to other historic causes of death, so I performed ad-hoc analysis using the CDC's WONDER database. I learned about population and demographic data and resources from the Census Bureau. Then, on the medical side, there were many aspects of public health and epidemiology I needed to learn to make sense of the data I was analyzing. I became familiar with scientific research available in The Lancet and even pre-peer-review Covid-19 research available on medRxiv. My Mom was a registered nurse, so I grew up with some exposure to medical terms and practices, but it was very different from the telecommunications industry in which I had spent my career!

[Chart: Daily Positive % versus Daily Test Volume]

There were times when I wondered why I was putting myself through it (and paying for it on my own dime)! I worried about how recruiters would view the time I spent climbing this learning curve, particularly since I did it mid-career and some might view it as a step backward! I truly did not know it would be worth it until a month or two ago, when I got a new app debugged and running, one which integrated choropleth plotting on a county-level map of the U.S. using derived ratios I had identified on my own as providing better context than anything I had seen for understanding local community trends. It had quite a few steps, all of which I automated, and I put it together at a fluid pace without getting stumped. This was in great contrast to when I set out on my journey and had to query StackExchange and other online forums, or read user guides for Python modules, at every turn to figure out how to do something! Perhaps an analogy would be that I was now riding the bike to a destination instead of continually falling over, fixing some bike component, then starting over and falling again a few feet farther! Maybe my expectations for life are too high, and it is difficult to have a job and other life activities that provide such a rewarding experience, but the journey I am describing has had a positive effect on my optimism and sense of hope, all during a period when events could easily have turned me negative and frustrated. Reinventing ourselves is not a path without difficult challenges and personal tests, but it is clear that we need to reinvent ourselves multiple times throughout our careers to remain relevant, challenged, and energized.
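For readers curious what a county-level choropleth looks like in skeleton form, here is a stripped-down sketch using Plotly Express, one common way to build such a map. My actual app had many more steps, and the data frame, FIPS rows, and "derived_ratio" metric below are placeholders for illustration, not my real derived ratios.

```python
import json
from urllib.request import urlopen

import pandas as pd
import plotly.express as px

# County boundaries keyed by FIPS code; this GeoJSON file is the one used in
# the Plotly documentation examples for U.S. county choropleths.
COUNTIES_URL = ("https://raw.githubusercontent.com/plotly/datasets/"
                "master/geojson-counties-fips.json")
with urlopen(COUNTIES_URL) as resp:
    counties = json.load(resp)

# Placeholder frame: in a real app these rows come out of the wrangling steps,
# one row per county with a derived ratio computed for it.
df = pd.DataFrame({
    "fips": ["08031", "08059", "13121"],   # e.g., Denver, Jefferson (CO), Fulton (GA)
    "derived_ratio": [0.8, 0.5, 1.2],      # hypothetical metric values
})

fig = px.choropleth(
    df,
    geojson=counties,
    locations="fips",                 # match rows to county shapes by FIPS code
    color="derived_ratio",
    scope="usa",
    color_continuous_scale="Viridis",
    labels={"derived_ratio": "derived ratio (illustrative)"},
)
fig.show()
```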

It started with being honest with myself about what it would take to reach the competency I demanded of myself, then setting out on a year-long journey. These days it seems more common for people to lay responsibility on others for their situation; there is a lot of condescending judgment and scapegoating. I feel we need to raise awareness across society of the negative influence of social media and how the impersonal nature of online interaction can lead both to dehumanizing others and to taking less responsibility for one's own actions. I try to force myself to be brutally honest about my skills, biases, and behavior; I call it regular introspection. That introspection is what led me to push myself through months of hard-core development sessions and to fight through bugs and refactor my faulty initial designs. I still have a lot to learn in data science, but the time I spent means I know I have more than some paper degrees; I did not skip past the hard, tedious work. I came out of this journey more resourceful, and I've improved my approach to problem-solving.

I am now seeking a return to technical product management, but where ML/AI is a core component of the product or service offering. I don't expect to be writing Python code on a daily basis, but I think there are great benefits to the deep dive I did. One benefit unrelated to my career, but very important, is that I am no longer a passive consumer of news on current events and topics; I have the skills to develop my own, accurate understanding of any subject. My analysis of Covid-19 made living through this pandemic less frustrating. I could generate at a moment's notice a more detailed and timely analysis than what I saw on the news, and that has been empowering! I feel I've run my course with analyzing Covid-19 and am now working on new analysis ideas in areas of business and technology, but with the press of a button I can generate updated plots with current data at any time. I'll always have this work in my portfolio, and I may find ways to reuse some of the code for my new work.

Gathering and validating source data, running analyses to get a feel for the data, testing hypotheses, and interpreting results aren't just for data science; they are skills vital to being an engaged citizen these days. We have transparency, honesty, and bias issues across politics and the media. The objectivity of content provided by social media and search platforms has been questioned. In addition, those platforms have on occasion been manipulated or hacked by domestic or foreign actors to push an agenda. I think it's foolish and unhelpful to try to put a 'good' or 'bad' label on tech providers or their platforms; what we need is to mature our understanding of how content influences our psychology and sociology. In a way, we are still in our infancy in understanding the impact of our growing dependence on an online world, and it only makes sense that as our knowledge matures we will have to adjust laws, regulations, and even public education to control the risks and downsides. Most of us are concerned about the polarization of issues in our society, and the role online sites, platforms, and algorithms may play in that.

I have a bachelor's degree in Psychology (and an MBA), which led me to a long-term interest in fields like Behavioral Economics and Organizational Psychology. Long ago I did a senior research project related to intentional and unintentional biasing of memory recall. Nowadays the term confirmation bias is better known; I was fortunate enough to learn in my undergraduate years that subtle variations in content can lead to big swings in memory recall or opinions. In our online world, serving up content that fits a previously expressed preference triggers a confirmation-bias-type reaction, and it increases the likelihood that subsequent similar information will be accepted without question by the viewer. Having studied and even experimented with the psychology of influence, I know that the potential is there for misuse. It certainly isn't limited to online content; sometimes this psychology is used to unduly influence witnesses or juries in court cases. But I think it is preferable for tech firms to establish positive policies and controls rather than wait for incidents that trigger heavy regulation. Separate from my skills development, this is an aspect of my data science journey in which I have had a growing interest, and more and more research is emerging to help us mature our understanding of online bias, influence, and objectivity.

Objectivity versus Bias and Lifelong Learning

At the start of this article I mentioned two big lessons, or perhaps themes, from my year-long Python development journey. They are:

· Building skills and personal habits to be objective and analytical with all my decisions, and being able to resist forces like confirmation bias that distort them

· Leveraging Neuroplasticity to be an effective lifelong learner and continue to take on new challenges and learn new things well into my adult life

As I said above, gathering and validating source data, testing hypotheses, and interpreting results are skills vital to being an engaged citizen these days. We have transparency, honesty, and bias issues in politics, the news media, social media, and even search results. We've had incidents where domestic or foreign actors infiltrated platforms and sought to manipulate the opinions of citizens to pursue an agenda. Concerned citizens need to be vigilant, and there is no better vigilance than putting the tools for identifying truth in one's own hands.

Is a claim a documented fact, an opinion, or an unfounded assumption? Money is lost, illness and deaths occur, and conflicts arise, often due to invalid assumptions, erroneous analysis, and confirmation bias. Keeping ego and emotions out of analysis is the essential first step in being fair and objective. This personal project has been about so much more than improving my vocational skills; it has also been about building my ability to be resourceful in assessing issues and topics in the world around me.

What about the Neuroplasticity aspect of my project? I have been able to bring about changes to my 'intuition' through more than a year of working a systematic process of validating or disproving claims with big data analytics. I have no way of examining my brain's adaptation, but I believe that every time we push ourselves to build new skills, engage in introspection, and replace old patterns with better ones through repeated action, we create new synaptic associations and extend our quality of life. At times our world seems chaotic and so many elements seem far outside our control, but we always have personal choice and the option of engaging in self-improvement. No matter what I do with my data science skills from here forward, focusing on what I could do to improve myself and grow during this frustrating period of pandemic provided me with a winning pattern that I can use for the rest of my life.

Please visit my blog, The Analytics of Value, for detailed posts on my Python project with Covid-19 and other posts related to my business, technology, and life interests! My LinkedIn profile is Brian Galindo Herbert, and my Twitter handles are BrianGalindoH and bgherbert.
