Memories & condolences


I met David moments before his passing. I was the last person on Earth who had a chance to know him. He told me about growing up in Arizona. The impression I got was that he was a kind person. Same age as me. I saw myself in him. 

These moments with David are something that I will carry with me for the rest of my life. His death shook me to my core. Reading about him here, I know that he is someone that I would have wanted as a friend.

I am so sorry for your loss. I still think of that day and wish that I could have saved him. 

Two more of David's articles from his 2023 blog. These involved some heavy-duty math, which was normal for him, but pretty much Greek to me.

Article 3 (written 5/23/23): How to Implement the Metropolis-Hastings Algorithm

When I create models for ranking sports teams and predicting outcomes of games, one of my favorite methods for fitting the model is the Metropolis-Hastings algorithm. I learned the MH algorithm back in college, when I used it on a project looking at star-forming regions. In hindsight, using MH was completely unnecessary for that work (fitting a line to a scatter plot), but I have since used it for a variety of more complicated problems.

So, what is the Metropolis-Hastings Algorithm?

In simplest terms, the Metropolis-Hastings algorithm is a way to fit a model to data. Rather than just providing the best-fit values for the model, it creates a sample of the parameter space that can be used to estimate the distribution of the parameters and the correlations between them. This is one of the main reasons to use MH rather than a simpler method, and it is particularly useful when the parameters are not normally distributed.

The Metropolis-Hastings algorithm is an example of a Markov Chain Monte Carlo (MCMC) technique. MH samples the parameter space by starting at some location in that space (called a state), then jumping to nearby locations. This sounds like a random walk, but in contrast to a true random walk, the MH algorithm randomly selects a new state and then rejects it if the new parameter values fit the data significantly worse than the current values (I will give the exact criterion for rejecting a jump in the next section). The result is that as the algorithm performs this semi-random walk, it tends to prefer states that provide better fits to the data, while still allowing for the sampling of lower-likelihood states.

How does the Metropolis-Hastings Algorithm Work?

Say you have some data and you want to fit a model to it that has multiple parameters, maybe even lots and lots of parameters. The model will calculate the likelihood of the data given a choice of parameter values (a state).

One of the other reasons to use the MH algorithm is because the model contains a large number of parameters. If you are fitting a line where you only have two parameters (slope and intercept), then you can easily just do a grid search of the parameter space. However, if you have dozens or even hundreds of parameters, then it is no longer feasible to search the entire parameter space that way. The MH algorithm does not have this limitation.

The MH algorithm is iterative. Each iteration, it starts at a state in parameter space, proposes a new state to jump to, and either accepts or rejects that new state. The new state is randomly selected from a proposal function.
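In symbols (notation assumed here, since it is not fixed by the post itself), the proposal step draws a candidate state θ′ from a distribution q centered on the current state θ:

```latex
\theta' \sim q(\theta' \mid \theta), \qquad
\text{e.g. } q(\theta' \mid \theta) = \mathcal{N}\!\left(\theta' ;\, \theta,\, \sigma^2 I\right)
```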

A typical choice for the proposal function is a Gaussian since it is symmetric and easy to implement, but there are some fancier choices that can make the algorithm more efficient.

After the proposal function creates a new state, the algorithm has to determine whether to accept or reject it. In general, the algorithm will accept new proposals if they provide a better fit to the data, but it won’t necessarily reject them if the fit is worse. To decide, an acceptance ratio is calculated and compared to a uniform random number between 0 and 1. The ratio contains a correction factor built from the proposal function itself; if the proposal function is symmetric, that correction factor is always 1. This correction is the Hastings part of the algorithm; without it, we just have the Metropolis algorithm.

The criterion for accepting or rejecting the proposed state is whether the ratio of likelihoods is greater than or less than a uniform random number between 0 and 1, called u below.
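In symbols (notation assumed here: L(θ) is the likelihood of the data at state θ, and q is the proposal function), the acceptance test is:

```latex
r = \frac{\mathcal{L}(\theta')\, q(\theta \mid \theta')}
         {\mathcal{L}(\theta)\, q(\theta' \mid \theta)},
\qquad
\text{accept } \theta' \text{ if } r > u, \quad u \sim \mathrm{Uniform}(0, 1)
```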

Put more simply, if the new state has a higher likelihood than the old one, the new state is accepted and used as the starting point for the next iteration. If the new state has a lower likelihood, it can still be accepted if the ratio is larger than the uniform random number. If it isn’t, the new state is rejected and the next iteration starts at the previous state. And that’s it. The algorithm repeats this process, collecting samples until there are enough to estimate the distribution of the parameters. Seems easy, right? There are a couple of things to be careful about.
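As a sketch, the whole loop fits in a few lines of Python. This is a minimal implementation with a symmetric Gaussian proposal; the function names and the use of log-likelihoods (which avoid numerical underflow) are my own choices, not from the original post:

```python
import numpy as np

def metropolis_hastings(log_likelihood, initial_state, n_samples,
                        step_size=0.1, rng=None):
    """Minimal MH sampler with a symmetric Gaussian proposal.

    `log_likelihood` maps a parameter vector to its log-likelihood.
    """
    rng = np.random.default_rng() if rng is None else rng
    state = np.asarray(initial_state, dtype=float)
    current_ll = log_likelihood(state)
    samples = []
    for _ in range(n_samples):
        # Propose a nearby state with a symmetric Gaussian jump.
        proposal = state + rng.normal(scale=step_size, size=state.shape)
        proposal_ll = log_likelihood(proposal)
        # Accept if the likelihood ratio beats a uniform random number
        # (compared in log space: log u < log L(new) - log L(old)).
        if np.log(rng.uniform()) < proposal_ll - current_ll:
            state, current_ll = proposal, proposal_ll
        samples.append(state.copy())
    return np.array(samples)
```

For example, running this with `log_likelihood = lambda x: -0.5 * float(x @ x)` samples a standard normal distribution.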

Optimizing the Acceptance Rate

The choice of the proposal function has a major effect on the efficiency of the algorithm. If the proposal function proposes states that are too far away, it will tend to propose too many low-likelihood states, leading to a very high rejection rate. The algorithm will get stuck in the same place for many iterations in a row, causing the random walk to sample very slowly. If the proposal function proposes very small jumps, then the acceptance rate will be high (close to 100%), but it will still sample the space slowly simply because the jumps are small. There is an optimal middle ground where the jumps are large enough to get around the parameter space without reducing the acceptance rate too much. Studies have shown that, as the number of parameters grows, the optimal acceptance rate approaches about 23%. It is not important to hit this exact rate; accepting roughly 1 out of 4 proposed jumps will lead to efficient sampling.
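One simple way to steer toward that rate is a crude adaptation rule like the one below. This heuristic is my own illustration, and it should only be applied during a warm-up phase, since changing the proposal mid-run technically breaks the Markov property:

```python
def tune_step_size(step_size, acceptance_rate, target=0.25):
    """Widen jumps when accepting too often, shrink them when
    rejecting too often (only do this during warm-up)."""
    return step_size * 1.1 if acceptance_rate > target else step_size * 0.9
```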

Autocorrelation

Even when sampling the parameter space most efficiently, the MH algorithm will still reject states more often than it accepts them. This means that the state may remain the same for several iterations. If the state is recorded every iteration, then nearby points will be highly correlated rather than being independent samples of the parameter distribution. This correlation between samples is called autocorrelation. It can be reduced by not recording the state every iteration. In fact, it is common to record the state only every hundredth or even thousandth iteration. This gives the algorithm time to move through the parameter space enough that the recorded samples are effectively independent.
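This "thinning" can be done as simple post-processing of a recorded chain (a sketch; the function name is my own):

```python
def thin_chain(chain, thin=100):
    """Keep every `thin`-th state so the retained samples are
    approximately independent."""
    return chain[::thin]
```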

Burn-in

The algorithm has to start at some state and you could choose any state you like. You might not know ahead of time what a reasonable starting point is; ideally, you would want to choose a state that has a high likelihood. The way around this is to simply let the algorithm run for a while without recording any samples. This process is called the burn-in. It is important to perform a burn-in because the states at the beginning won’t necessarily reflect the underlying parameter distribution you are trying to sample. After the burn-in, the algorithm will have automatically gravitated towards a better starting point.
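In code, the burn-in amounts to dropping the first stretch of the chain before computing any statistics (a sketch; the name and default are illustrative):

```python
def discard_burn_in(chain, burn_in=1000):
    """Drop the first `burn_in` states, which may still reflect the
    arbitrary starting point rather than the target distribution."""
    return chain[burn_in:]
```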

Conclusion

This was a quick overview of how to implement the Metropolis-Hastings algorithm. While this description is hopefully enough for an interested person to write their own MH code, there is much more to learn, such as why the MH algorithm works, how to optimize the shape of the proposal function, and additions to the algorithm that make it more effective for complicated models and parameter distributions.

At some point, I hope to post about some of the models I have created for college basketball, college football, and professional soccer which showcase the Metropolis-Hastings algorithm. 

Article 4 (written 6/10/23): A Predictive Soccer Model

I am a huge nerd. I am also a huge sports fan. One of my favorite hobbies is to be both at the same time by creating models to assess the performance of teams and predict the outcomes of future games. In this post, I am going to describe a model for predicting soccer goals. In future posts, I will add some more complexity to the model. Also, I am going to be using the word “soccer” instead of “football” because I live in the United States. If this bothers you, you can send your complaints to my Twitter* account.

*I am not on Twitter

For this post, I will be applying the model to the 2022-23 English Premier League season, even if that might be a little painful for me (I am an Arsenal fan).

The Model

A common practice when modeling the number of goals scored in a soccer match is to use a Poisson distribution. Rather than assuming a single expected value for each team, I assign an offensive coefficient, a, and a defensive coefficient, d, to each team. A team's expected number of goals in a match is the product of its own offensive coefficient and its opponent's defensive coefficient.

The model also includes a parameter for home advantage, h. The expected value is multiplied by h for the home team’s expected goals and is divided by h for the away team’s expected goals. Typically, h is a number slightly greater than 1.
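In code, a single match's expected goals under this model look like the following (a sketch; the function name is mine, and any coefficient values plugged in are made up for illustration):

```python
def expected_goals(a_home, d_away, a_away, d_home, h):
    """Expected goals for one match: each team's offense times the
    opponent's defense, with the home side boosted by h and the
    away side penalized by it."""
    return a_home * d_away * h, a_away * d_home / h
```

For example, `expected_goals(1.5, 0.9, 1.1, 0.8, 1.1)` gives 1.485 expected goals for the home team and 0.8 for the away team.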

One way to obtain values for the model parameters (offense, defense, home advantage) is to find the maximum likelihood estimate (MLE). This means I need to maximize the probability given by multiplying together the Poisson probabilities for every game.
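Written out (in notation I am assuming here: game g has home team i(g) and away team j(g), observed home and away goals x_g and y_g, offensive and defensive coefficients a and d, and home advantage h), the likelihood is a product of two Poisson factors per game:

```latex
\lambda_g = a_{i(g)}\, d_{j(g)}\, h, \qquad
\mu_g = a_{j(g)}\, d_{i(g)} / h,
\qquad
\mathcal{L} = \prod_{g}
\frac{\lambda_g^{x_g}\, e^{-\lambda_g}}{x_g!}
\cdot
\frac{\mu_g^{y_g}\, e^{-\mu_g}}{y_g!}
```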

Finding the values of parameters that maximize this function requires taking a derivative with respect to each individual parameter, setting the derivative to 0, and solving. The problem is that the function above is made up of many, MANY terms being multiplied, so taking the derivative would mean using the product rule about a billion times and then dealing with a somewhat unwieldy function. A nice way around this is to take the log of both sides. This takes advantage of a nice property of logs: The log of a product equals the sum of the logs.

This property works with any logarithm. [Bob Ross voice]: I am partial to the natural log, but you can use any logarithm you like.

Now taking a derivative is a little easier since terms are being added rather than multiplied. Consider the derivative with respect to team k's offensive coefficient.

The Kronecker delta is used to just select matches featuring team k. A similar equation exists for each team. Taking the derivative with respect to the defensive coefficients results in a very similar set of equations.
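Setting that derivative to zero and rearranging gives a balance condition of the following form (a sketch in notation I am assuming: game g has home team i(g) and away team j(g), observed goals x_g and y_g, and δ is the Kronecker delta selecting team k's matches):

```latex
\sum_{g} \Big[ \delta_{i(g),k}\, x_g + \delta_{j(g),k}\, y_g \Big]
=
\sum_{g} \Big[ \delta_{i(g),k}\, a_k\, d_{j(g)}\, h
           + \delta_{j(g),k}\, a_k\, d_{i(g)} / h \Big]
```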

The left side of the equation is just the total number of goals scored by team k. The right side of the equation is the total number of goals predicted by the model. This shows that the MLE parameter values will be those that match the actual number of goals scored by each team. If we went through the same steps for the defensive parameters, we would find that the MLE values of those parameters would match the actual number of goals allowed by each team.

We can go through the same hullabaloo with the home advantage parameter. If we take the derivative of the log-likelihood and rearrange the equation, we get an analogous balance condition.
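That condition can be sketched as follows (same assumed notation: x_g and y_g are the home and away goals in game g, with home team i(g) and away team j(g)):

```latex
\sum_{g} \left( x_g - y_g \right)
=
\sum_{g} \left( a_{i(g)}\, d_{j(g)}\, h \;-\; a_{j(g)}\, d_{i(g)} / h \right)
```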

The left side is the total home goals minus the total away goals. The right side is the predicted home goals minus the predicted away goals. The MLE value of h will make the real and predicted values of home minus away goals the same.

Finding Coefficients Iteratively

The English Premier League contains 20 teams. Since each team has two coefficients (offense and defense) and there is also a home advantage parameter, that means there are 41 parameters in the model. We want to find the values of each parameter so that the predicted number of goals scored and allowed by each team matches reality. This might seem daunting because we are solving a system of 41 equations with 41 variables and each equation includes both offensive and defensive coefficients. This means that a depends on d and vice versa.

Rather than actually solving the 41 equations simultaneously (which you could hypothetically do), I find the solutions iteratively. I start with a guess for each parameter (it doesn’t matter what I guess), then solve for the offensive coefficients given the values of the other coefficients. This particular model makes it especially easy to solve an equation for one iteration because the predicted number of goals is directly proportional to both the offensive and defensive coefficients. If a[k] is doubled, then the predicted number of goals for team k is also doubled.

As an example, let’s look at Arsenal (and try to forget how they blew their chance to win the league for the first time in almost two decades). Arsenal scored 88 goals throughout the season. If we use our initial guess of parameter values to predict the goals Arsenal will score, we get 71. We can make the predicted number of goals equal exactly 88 if we multiply Arsenal’s offensive coefficient by 88/71. We would also multiply each other team’s offensive coefficients by their (actual goals scored) / (predicted goals scored). After doing this, each team’s predicted goals scored will match the real goals scored. We would then do the same for the defensive coefficients. After multiplying each defensive coefficient by (actual goals allowed) / (predicted goals allowed), each team’s actual and predicted goals allowed will match.

We have to continue adjusting the parameters because when we adjust the defensive parameters of each team, the predicted goals scored by each team changes. This necessitates adjusting the offensive coefficients again, which changes the predicted goals allowed. If we alternate between adjusting the offensive and defensive coefficients, the values eventually converge to the MLE solution. We also need to adjust the home advantage parameter each iteration.

In practice, I find that it takes on the order of tens of iterations to converge to the MLE solution for a league containing 20 teams.

Results

After leading the league for most of the season, Arsenal slumped down the stretch and ended up losing the title by 5 points. It felt like the most Arsenal thing for them to do. ESPN even published an article on March 30 about how Arsenal fans were terrified of their team blowing the title race. Less than two weeks later, Arsenal began a stretch of four games that blew the title race. In two consecutive away matches (Liverpool and West Ham), the Gunners blew a 2-goal lead and settled for a draw. Then they had to scrape their way to a 3-3 draw at home against bottom-of-the-table Southampton. Finally, Manchester City themselves dealt a nearly fatal blow by winning 4-1 against Arsenal. Even if Arsenal were outmatched by City, they still would have been champions if they didn’t drop 6 points in 3 very winnable matches.

Now that I have vented, let’s look at the results of the model. First, the offensive and defensive parameters.

According to these numbers, Manchester City was the best offensive team (having Erling Haaland on your team helps with that). The best defensive team was Newcastle United (smaller defensive numbers = better defense).

An interesting feature of the model that I haven’t mentioned yet is that there isn’t just one MLE solution. If all the offensive coefficients were multiplied by the same factor and all the defensive coefficients were divided by that factor, none of the model’s predictions would change. The results were scaled so that the averages of the offensive and defensive coefficients are the same.

Manchester United: The red team from Manchester finished third in the league, but the model actually has them as the sixth-best team. They ended up with 75 points, whereas the model says they should have expected about 62.4. The team that finished one place behind them, Newcastle United, scored 10 more goals and allowed 10 fewer goals, so the model has them about 12.5 points better than Man U. Newcastle’s issue is that they simply drew too many games: they ended up with 14 draws, tied for the most in the league.

Leicester City: The Foxes really struggled throughout the season and ended up being relegated. Even more painfully for their fans, the model shows that they probably should have been able to stay up. The model had Leicester City on 43.82 points, good enough for 14th place. They had a better goal difference than 4 of the 5 teams above them in the table, but weren’t able to make the most of their abilities. In fact, Leeds United was also relegated despite the model having them safely outside the bottom three. In addition to Southampton at the bottom, the model had Bournemouth and Wolverhampton Wanderers being relegated.

I am going to look more closely at how well this model fit the actual results. I am also going to look at some other parameters that can make the model more accurate. In particular, the model presented here assumes that the probability of scoring a goal is constant throughout a match. Intuitively, we expect that not to be the case; we would expect a team that is trailing to adjust its tactics to emphasize scoring a goal.

Yeah, pretty much Greek to me, but not to him. (Norm Schenck, David's dad)

During 2023, David took a very long and intense data analysis course at the Flatiron School. One of the course requirements was to keep a blog, and in it David wrote multiple articles. These two struck me the most. I think his articles went above and beyond the course requirement.

First article (Aug 1, 2023): Why the Rise of AI is Concerning

The past year has seen a meteoric rise in artificial intelligence (AI). The kind of technology that used to feel like it belongs in a sci-fi book has snuck up on us and it is questionable whether we are ready for the changes it will bring. While AI has great potential to revolutionize the world, it also creates challenges and potential disasters.

Note: I am not doing that thing where I reveal at the end of this post that the whole thing was actually generated by ChatGPT or some other AI text generator. I can prove it: Potato, aardvark, gazebo, turquoise. New AI is too advanced to produce random non sequiturs. Old versions of text generators may have been prone to silly mistakes, but state-of-the-art AI is so good at mimicking human text that it can convince someone it was written by a person. Famously, a former engineer at Google named Blake Lemoine claimed that the AI the company was developing was sentient (despite knowing its mathematical underpinnings).

What are the Benefits of AI?

Before I get into the pitfalls of AI, I do want to talk about why there is reason to be optimistic. One of the main benefits of AI is an increase in efficiency. There are certain tasks that humans cannot do (or at least cannot do in a practical amount of time), but AI can do incredibly quickly. Applications range from the mundane, such as writing routine emails that people don’t want to deal with, to life-changing, such as developing gene technologies or helping find cures for cancer. If AI is used responsibly and with foresight, then it can revolutionize our society in many positive ways. Unfortunately, not every application of AI is going to be implemented responsibly and with foresight, potentially leading to the problems discussed below.

Artificial Intelligence is not Intelligent

First, I want to point out that I have been referring to technologies like ChatGPT as artificial intelligence, but it is important to state that there is currently no actual artificial intelligence in existence. ChatGPT is not intelligent. It does not make decisions the way you and I do (or the way that a dog or cat does, for that matter). The technologies that we call AI are nothing more than models that search for patterns in the data they are fed and then recreate those patterns. The reason they are performing so much better than in the past is partly because computing capabilities have improved enough to utilize tons of data to create highly complex models.

Maybe someday, an actual artificial intelligence will be created. That would be exciting because it could shed light on how our own minds work and what consciousness is. For the time being, we have fake AI. The lack of intelligence is just one of the reasons many people are concerned.

Problems with AI

No Intelligence = No Ethics

Since AI is not actually intelligent, that means it is not actually making decisions when it produces some output. This means that the AI is not able to ask itself questions like, “who might be harmed if I do X?”. In some applications, this might not seem like a big deal, but it could be a huge deal in other contexts. What if AI is tasked with determining who gets a kidney transplant or whether someone is eligible for health insurance? I do not want life or death decisions to be made by something that doesn’t know what death is.

AI is not Unbiased

One misconception about computer models and AI is that they are unbiased because they don’t have opinions or desires the way a person does. While an AI is not going to intentionally harm one group because it feels like it (since AI does not feel anything), that does not mean it is free from bias. The technology is only as good as the data that is fed to it. If the data is biased, then the AI will be, too. One example of this that has already happened is face-detection technology that works much better for white people than for anyone else simply because the model was trained primarily on white faces. Text generators that are trained on data scraped from the internet will reflect our prejudices back at us if no attempt is made to filter the data. If you feed racist, sexist, and homophobic data to the model, it will return the same.

AI Can Make Tons of Money…Just Probably Not for You

As someone who is currently studying data science so that I can hopefully make it my career, I am personally quite concerned about how AI will affect tech jobs. There are already efforts being made by some companies to use AI in place of human workers. If these efforts are effective, the result will be that companies will make more money because of increased efficiency at the expense of the livelihood of many workers. AI is just one more way in which the rich can get richer. Also, since implementing AI is expensive, bigger companies are going to have a significant advantage. The result could be that smaller businesses will find it even harder to compete in the global economy.

Education Needs to Change Fast (and it is Bad at Doing That)

I spent the last five years as a high school math teacher (Aug of '18 to May of '23). During the 2020-21 school year, the pandemic put us in a position where we had to teach one full semester completely remotely, then another semester with half the students in the classroom and the other half learning from home. Cheating was rampant during this time since we could not monitor the students who were not in the classroom. For math specifically, I learned that many students were using apps on their phones that could essentially do the problems for them. The effect was that many students were light-years behind where they were supposed to be when they did finally return to in-person schooling.

That scenario is now playing out on an even larger scale due to AI. It is now possible for students to get a full essay on the topic of their choice, even specifying what level of work to produce, from ChatGPT. English teachers are faced with a Turing test every single time they grade a student’s written work.

If our education system wants to achieve its aim of actually educating people, it has to make drastic changes. It needs to rethink which assignments need to be done at school vs. at home. It needs to rethink what skills students need to learn so they can utilize their knowledge later in life. My worry is that education does not do a great job of changing with the times in general. Now we are asking it to change in response to a complex, quickly evolving technology.

Not Only Does AI Make Plagiarizing Easier for Students, It Plagiarizes By Itself

AI is not just being used to generate text, but also digital art and music. In order for an AI to function, it must be given data. This means every piece of art or music generated by an AI is in some way derivative of the work it has been given, which in some cases includes artistic work whose authors did not consent to its use for training an AI. If this is allowed to continue, actual human artists will find it even harder to make a living than they already did while computers generate soulless copies of their work.

Misinformation will Outweigh Facts

Misinformation on the internet and in the media is already a concern, but the problem could get worse by orders of magnitude thanks to AI. A study by the Europol Innovation Lab predicts that 90% of content on the internet will be synthetically generated by 2026. That is not to say that it will specifically be misinformation, but this circles back to the lack of ethics in AI. A text generator is just trying to recreate the patterns it found in the data, without any regard for whether it is saying something true. A good example of this was a couple of lawyers who used ChatGPT to write a document that made legal arguments based on cases that did not actually exist. ChatGPT “knew” that lawyers used precedents set by past cases to make arguments, but did not know that those cases needed to actually have happened.

In addition to content that is unintentionally misleading or wrong, there is also a concern that bad actors could intentionally use AI to generate false news stories (complete with fake photographic evidence). My worry is not even that people will fall for these fakes; I am more concerned that their existence will erode any trust in the truth. If half of the stories you read are fake, how will we know when something is actually true?

Conclusion

Artificial Intelligence has the potential to revolutionize the lives of every person on Earth, making our lives easier, healthier, and more efficient. It can be tempting to want to dive in with both feet and try out this tech in anything and everything, but the potential downsides are very real. Some have even come true already. If humanity is going to avoid doing major harm to itself in the coming years, we need to closely regulate how AI is used to the best of our ability. I worry because I expect more recklessness than caution.

Second Article (Sept 8, 2023): Why the Bowl Championship Series Failed

One of my hobbies is to create mathematical models to rank sports teams. I have done this for basketball, soccer, football, and volleyball. I find enjoyment in taking past results and using them to make predictions about future games.

Between 1999 and 2014, using models to rank teams was not just a fun math exercise; it played a major role in determining which teams got to play for the NCAA football National Championship. During that timeframe, two teams would get selected for the National Championship game based on several polls as well as 6 computer rankings. The motivation for introducing the computer rankings was to try to remove some of the bias from the selection process. What it actually did was create confusion about how teams were ranked, make a lot of people very angry, and prove that college football needed to expand its playoff (which it did in 2014 with a 4-team playoff, and will do again with a 12-team playoff starting in 2024).

So why did the computer rankings bomb so hard? Were the computer models at fault or were football fans just being salty that their team got left out of the championship game? Let’s look at why the BCS was so unpopular.

Computers Are Unbiased, Right?

One of the motivations behind using computer rankings was avoiding the kind of bias that a person would typically have. A person is prone to assuming that certain teams are better than others based on their historical performance (i.e. we expect Alabama to be better than Boise State because Alabama has been better in the past). A computer model will not have such a bias. The teams’ names and records from previous seasons are not used as input, only scores from the current season, so there is no such historical bias.

The problem is that a computer ranking can still have other types of bias. For example, the model might put too much weight on strength of schedule, rewarding teams for simply playing tough opponents rather than winning. Another model might put too little weight on strength of schedule, rewarding a team for winning even if their opponents were all cupcakes. Hypothetically, the variation in the models could be seen as a positive. It is like having a random forest of decision trees that each have their own strengths and weaknesses, but together produce better results than any individual model could. However, that is not how the football-loving public perceived the BCS. Instead, when the 6 models disagreed with one another, it was viewed as evidence that they were flawed, decreasing confidence in the models. Another factor that decreased trust in the models was that no one knew how the models actually worked…

Bowl Championship Secret

The 6 computer models used in the BCS were all developed by individuals who chose not to release the details of how their models worked. Since the demise of the BCS, some of those details have become public knowledge and it strikes me as strange that these were ever secret to begin with. It is not as if the models used some proprietary technique worth keeping under wraps.

Keeping secrets, as in a relationship, bred mistrust. When football fans disagreed with the rankings, they pointed the blame at the black-box computer models. Not knowing how they worked, it was natural to assume that there was something wrong with them. The BCS might have been able to avoid this problem by making an effort to elucidate how the computer models worked, but that might have been difficult because…

The Models Were Too Complicated

While the math behind the models was not that complex (you would not need a PhD in math to understand it), it was still too complicated for the average football fan to get their head around. What is a probability distribution? What is a matrix? While I personally enjoy mixing math and sports, most people don’t want to do calculus in order to understand their favorite sport. Maybe this is the reason the models were not released: people would not have gotten much out of it anyway. Or possibly, the models were kept secret because…

The Models Were Too Simple

While the average person would not be able to decipher the math behind the computer models, anyone who has taken a statistics class and a linear algebra class would be. Perhaps the models were kept behind closed doors to protect them from scrutiny. After the BCS was replaced by the College Football Playoff and some of the models were made available, such scrutiny did follow. For example, Dr. Gregory Matthews posted a review of the BCS on his blog in which he pointed out that the Colley Matrix model does not actually care who wins a given game; it only cares about how many wins a team gets and whom they played. If this information had been known while the model was used in the BCS, it would have been a serious blow to the legitimacy of the computer rankings.

There was one factor that forced the computer models to be simpler than they probably should have been…

Don’t Run Up The Score

In college football, there is an unwritten rule that a team should not try to score more points if it is already up by a large margin with only a few minutes remaining. This rule, which exists for the sake of sportsmanship, would have been at odds with any computer model that rewards teams more the larger their margin of victory. Thus, a decision was made that none of the models could incorporate margin of victory into their rankings. While this choice was well-motivated, it hamstrung the models in a pretty major way. Models that incorporate the scores of games tend to perform better than ones that only use wins and losses. The computer models could have been improved if the BCS had found a balance between using the actual scores and not encouraging running up the score. One potential solution would have been treating all wins by 14 or more points as wins by exactly 14, so there was no incentive to pad an already insurmountable lead.
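The capping idea above can be sketched in a few lines. This is a hypothetical illustration only; the function name, the 14-point figure, and applying the cap symmetrically to losses are assumptions, not anything the BCS actually implemented:

```python
def capped_margin(points_for, points_against, cap=14):
    """Clamp a game's margin of victory to +/- cap points before it
    reaches a rating model, removing any incentive to run up the score."""
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

print(capped_margin(45, 10))  # a 35-point blowout counts as a win by 14
print(capped_margin(21, 17))  # ordinary margins pass through unchanged (4)
print(capped_margin(7, 31))   # a 24-point loss is treated as a loss by 14 (-14)
```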

With or Without Computers, the BCS was Flawed

So far, I have been concentrating on the BCS computer models' problems, both real and imagined. The truth is, the biggest problem with the BCS had nothing to do with the computers. The biggest problem was that choosing just two teams was a hopeless exercise no matter how it was done.

During the BCS era, the number of teams in the highest division of college football ranged from 114 to 126 (it changed because some teams chose to change divisions). Each team played just 12 or 13 games during the regular season, meaning it faced only about one-tenth of the other teams. The strength of schedule (how difficult a team's opponents are) varied wildly from one team to another, making comparisons even harder. Often, the BCS was tasked with comparing teams that did not play one another and had few or no common opponents. The problem is not that it is hard to choose the correct teams for the national championship; it is that there is no correct choice. Most seasons, there were more than 2 teams with resumes strong enough to be considered for the title game. No matter who got left out, someone would be angry.

Fixing the Problem with Playoffs

The BCS was finally replaced in 2014 by the College Football Playoff. Gone were the computer rankings. More importantly, 4 teams would now be chosen to compete for the national championship instead of 2. This did not end the controversies and criticism over who was chosen and who got left out, but it is better to argue over who is 4th best than over who is 2nd best.

The playoffs will further expand in 2026, when 12 teams will be invited to play for the championship. This is a huge step in the right direction and fixes the biggest issue that plagued the BCS. There will still be people arguing about who should have been selected and who should get a bye, but at least this will let the players decide on the field who is the best.

These are just the tip of the iceberg as to how deep into data analysis he was. I can post a couple more later.

Norm Schenck, David's dad 

David was with me through the brightest and darkest moments of my entire adult life. He celebrated my joys and helped carry my sorrows. In one of my hardest times recently, he told me I made every place I ever entered better.

But the truth is, my world was better because he was in it.

When I arrived in Colorado last week, the plane had just landed in Denver, and music came on as we were waiting to exit the plane. One of the songs was Hallelujah, and I found that song to be very calming, without knowing why. When I got home, I determined that the rendition I heard was by Rufus Wainwright. I felt I had a strong connection to the song, again without knowing why. I found the lyrics difficult to interpret, so I looked up interpretations of the song. This is the first one to pop up.

    "This is not a victory song. It's not about a perfect, pure kind of faith or love. It's about how life breaks you down, how sometimes all you have left is a Hallelujah - even if it's a broken one."

That resonated with me. 

  As some, maybe many, of you know, David had an extreme interest, even fascination, with Astro Physics and in understanding how the universe works. It is apparent to me now, that David's vehicle in this life, his bodily health, had failed him, and that he had decided to move on to the next level, to continue his quest to understand the universe. Now he has moved to where his bodily limitations are no longer an impediment.

  Yes, we are saddened that he is gone from our world, but we can also take solace that he is in a better place now. Life here had broken him down, and he decided to move on. All he had left was a Hallelujah.  

Norm      David's dad       

In lieu of flowers

Please consider a donation to any cause of your choice.

 When I think about David's life, I can't help thinking about Elton John's song, Candle in the Wind. In that song, I believe that the Candle, or more specifically, the flame on the burning candle, represents a person's life and the brilliance of that life. I believe that the Wind represents the adversities in a person's life, essentially what life has dealt to that person during that life, the challenges to continuing that life. I think the real subject of the song, is talking about how fragile people can be in their lives, because of the influence of the Wind.  

 Yes, that song was written about Marilyn Monroe, and a later version was dedicated to Princess Diana. Each of those ladies had a brilliance about them, and in their cases, a brilliance that was recognized by millions of people. What few people realized, was the degree of their fragility, which became much more evident after they had passed. 

 David had a brilliance about him also, a much more subtle and understated brilliance, an intellectual and professional brilliance. What very few of us realized, maybe none of us, was his degree of fragility. Only now, do we realize that. We will all miss him very much, but we can also celebrate the brilliance of his character and his life, and agree that it was way too short.   

 David, my son, rest in peace. Rest in peace.    Dad        

I enjoyed watching David grow up tagging along with his big sister and Jennifer. When he was young I had a front row seat watching his love of K'NEX and basketball. When he was in middle school I got to be his ride to school for a semester, and I always remember coming prepared with a question for the short ride. He always took the early morning inquiry with grace and answered it… even though being a 12 year old kid riding with his big sister's mom to school was likely not his favorite hotbed of conversation. He is loved, remembered and missed.
David had finished onboarding at cQuant shortly before I joined. It was reassuring to see a person who had a background in physics like me (also at CU) flexing those acquired skills to excel in a position in an "unrelated" field. 

He set an extremely high bar to follow as he got up to speed faster than anyone had previously. Despite being immersed in the energy industry for a relatively short time, David was an invaluable resource for me in those first few months. In the rare case that he didn't know the answer to a question I asked, he either knew how to find out or knew who to go to as a next step. But when he did know the answer, his explanations were exceptional. It was immediately clear that he was not only gifted at teaching, but had also spent significant effort honing that skill and would adapt his explanations to his audience. 

I worked closely with David on a few projects during the year and a half we shared at cQuant. He was always dependable, communicative, efficient, and a joy to work with. Others have noted it, but his sense of humor frequently caught me off guard in the best of ways. I occasionally was able to pry a smile out of him, and that always felt like a great accomplishment. 

In times like these, words feel horribly inadequate. David was excellent. His loss has been a great blow to me, both personally and professionally. I can only imagine how terrible this has been, and will continue to be, for his close friends and family. Please know that David made a tremendous impact and will be missed by many. 

I worked with David on many projects over the years at cQuant. David was just wonderful to work and spend time with. David was exceptionally clever. This came through not only in the quality of his work, but he was also quick on his feet in conversation and would make coworkers and clients laugh and smile when they weren't expecting to at just the right time.

Recently I had been working on some products that David helped me validate. I was always so grateful for his quick understanding of technical concepts, his excited curiosity, and a really deep empathy for people. These qualities made him the absolute best to share ideas with. When I had ideas, I could always count on David because I knew he would know what I meant right away and ask all the right questions to get something headed in the right direction.

I am glad I got to know David. I will miss him dearly and my thoughts and condolences go out to everyone else who had the privilege of knowing him.

  When David was in his early years of grade school, he had gotten behind in the subject of arithmetic. One of his teachers mentioned that to me, so I made up a stack of flash cards that David and I would go through after dinner, every evening.  Within a few weeks, David was coming up with the right answers to each one, within a second or two. I added some harder ones, and he quickly got those too. He admitted to me years later, many years later, that using those flash cards made something click in his brain. He said that math became easy after that. He graduated from High School with a 4.2 GPA, and went on to the University of AZ. He took many math courses there, in the quest for his degrees in physics and astronomy.

  The math dept professors at the UofA were notoriously bad, as they were when my brother and I had been there, 30 some years earlier. So, David would get extra books from the library and teach himself the subject at hand. He took forms of math, and to this day, I can't tell you what they are for. When I mentioned that I had a difficult time with Calculus classes there, he said that Calculus was not only easy, but fun.  I will never assign either of those adjectives to the subject of Calculus. 

 David's ability to grasp very complex subjects, was a marvel to me, and was light years beyond my abilities. And beyond that, David's ability to explain complex concepts to other people, in terms that they could understand, was something that amazed me.   Norm Schenck/David's dad 

      

2024, Tucson, AZ, USA
I forgot I had this photo - David is the only person whose lap Jupiter would sit in

2025, Boulder, CO, USA
cQuant Tech Team Dinner at the Bohemian Biergarten
I had the privilege of being David's manager at cQuant for his first six months at the company. As others have noted, David was exceptional, and this was immediately apparent not only in how quickly he was able to grasp the complex and nuanced intricacies of cQuant's energy analytics software solutions, but also in the way he was able to articulate those concepts to others.

David was a natural educator. He had an amazing ability to explain concepts to others in a way they could actually grasp. I think this was partly because David developed such a deep understanding of the concepts himself that he could explain every detail in multiple different ways, enabling him to tailor his message to suit his audience. Importantly, I think it was also because David genuinely enjoyed helping others learn and grow, and he took personal pride in seeing their development.

David led an important session earlier this year at cQuant's first ever "cQuant University", a customer conference aimed at training cQuant's users on methodology and best practices for use of its software. His presentation gracefully conveyed complex mathematical concepts in a way that even the newest and least technical cQuant users could grasp. I was impressed. This made me immediately think of David in connection with our efforts to develop relationships and partner on curriculum development initiatives with technical universities in Asia as a way to support our expansion into that market. I had introduced this idea to David, asking if he'd like to be involved, and he was enthusiastic about participating, saying it sounded like something he would very much enjoy. I'm greatly saddened to know I won't get to work with him on this project.

David, we're going to miss you, and we'll remember and emulate your high standard for educating both customers and cQuant team members.
David is a quiet, giving person who is always exceptional and never spends any time emphasizing his exceptionality. He just does what he thinks best and is generally right. He is very good at understanding deeply what people need and doing it. It is hard to explain how "seen" this makes people feel, and he also has a great respect for the privacy of others and manages to make this seem attentive instead of invasive. I do not know David very well, but all of my time with him is thought-provoking, interesting, and quietly humorous.

I will remember David as exceptional, thoughtful, and humble. He will be missed. To his (better) friends and family, please accept my condolences.

 Please allow me to brag about my kids. They have both achieved levels of professionalism and intellect that are way beyond mine. Jennie earned a degree in Psychology from a college near Portland, Oregon. And when she graduated, she said "dad, that was too easy. I need something harder." I remember thinking to myself, "too easy! TOO EASY?" Then she put herself through law school, and passed the bar exam in Oregon on the first try. The Oregon bar exam is known as one of the most difficult in our nation.

David started his college years at the University of Arizona, and elected to go into Mechanical Engineering. After the first semester, he switched over to Physics AND Astronomy. In the remaining 7 semesters, he earned Bachelor's degrees in both of those, and a minor in math, with straight A's for the entire 4 years. He then went to the University of Colorado (CU) in Boulder, for graduate studies in Astro Physics. He earned a Master's degree there, and was working on a PhD, when he was informed that the program he was working on had run out of funds. About the same time, he realized that there were very few jobs out there for guys with a PhD in Astro Physics. So, he switched over to the Education college and got a teaching degree, as well as enough additional math to turn his minor in Math from the UofA into a Bachelor's. He then taught math at Boulder High School. That wasn't enough for David.

 He resigned from that teaching position, and took a very long, very intense course in Data Analysis. He then landed a wonderful job at cQuant. I'm told that he became a star there. That doesn't surprise me at all. It appears that his work at cQuant became the highlight of his life. When I would periodically contact him and ask how things were going, all he talked about was his work at cQuant.

 I am humbled by what Jennie and David have achieved. You may think I'm bragging, and you're probably right, but I hope that the way they were raised is at least a factor in their success. I must say that my pride in them is extremely strong. My sadness about David's passing is also extreme.

Norm Schenck/ Jennie and David's dad           


I had the pleasure of working closely with David for the last couple of years at cQuant. From the moment he joined our team, he exceeded every expectation for how quickly someone could get up to speed.

David had a wonderful sense of humor, and his dry delivery was something I appreciated every day. There were many times when I had to turn off my microphone and camera because I was laughing so hard at something he had said.

Even after leaving BHS to join us, David was always a teacher at heart. Like all great teachers, he loved to learn and took real joy in helping others learn. His approach to projects changed the way our whole team thought about training and sharing knowledge. He was often described by both colleagues and customers as someone who was always ready to help and could be counted on whenever you needed him.

David was an excellent person, both professionally and personally. His influence on our team will stay with us for a long time. I will miss him deeply.

Sending love to his family and all who cared for him.

Thanksgiving

I worked with David at cQuant for a few years. We were in different groups, but we interacted semi-regularly. David was great to work with. Always friendly, helpful, detailed, knowledgeable, and excellent at running projects. If David was running a project I brought in, I knew that no matter how wonky or complex it might be, it would be handled well and ultimately be a success. All of my clients loved working with him and all had great things to say about him. I enjoyed all my interactions with David. He will be missed.

Much love to David, his family, and his loved ones. 

When we were growing up, you could always count on David to be a scary dinosaur. I can vividly remember him chasing us around with little T-Rex arms. I think our entire family had The Land Before Time movie memorized because it was his favorite and we watched it so often.

As we got older we did not see each other as much, but Thanksgiving was the one time of the year we were together- usually playing football or basketball in the front yard with our Uncles.

David was athletic and competitive. He seemed knowledgeable about every sport. One time Jenny and David came over for dinner, and he ended up in the backyard teaching Jonny some new soccer moves.

When Hailie was in 5th grade, David used the two of us for a research assignment he had in one of his education classes. It was a cognitive interview focused on how adults and children answer math and science questions. We completed the observation via Skype, and afterward he gently explained why he thought Hailie performed better than I did, crediting her Montessori education. I truly enjoyed being invited to be part of his project; he was always interesting and intelligent.

More recently, I met Jenny and David at Sabino Canyon for a hike. It was peaceful, and I remember looking at him and thinking that he reminded me of Uncle Harry when we were kids. 

But the memory that stands out is being chased by that silly little dinosaur. My heart is broken for what my family has lost. Sending my love to everyone. 

Remember when we had dinner at your dad’s house? Your grandma and I asked such elemental questions about the universe. Even though your depth of understanding was so amazingly immense, you were able to answer our questions in such a way that even we could understand. Your dad simply smiled with pride. Now you get to expand your awareness of the grand universe. Enjoy the serenity until we all meet again.

 When David was 10-14, he played basketball in a YMCA league year-round, four seasons a year. When he started, he was not a particularly good player. When he finished the league, he was one of the best in the league, and was the undisputed leader of his team. Throughout most of that four-year period, there was another team that was David's team's rival. The season championship game of the very last season was between David's team and the rival team.

  Over those years, the coach for David's team, Ed Lopez, noticed that the intensity of his players during the games was based on David's intensity. When Ed asked David to "turn it up", David did, and so did the rest of the team. The "turn it up" request was made partway through that game, and David's team won that game by a narrow margin, because they all "turned it up".

 For those of you who knew David well, you probably know that he always put forth his best effort at whatever he did, at everything he did. That was his M. O. from a very early age.   

Norm Schenck/David's dad   

Remember when we decided to adopt 2 kittens (which was probably negligent given that we were living in someone else's house and didn't ask them)? We totally forgot to bring cash when we were adopting them so Mom had to write a check. We had already decided to name one of the kittens Jupiter because it was space themed and because of that All That game show skit we always thought was so funny when we were kids. We hadn't come up with a name for the other one until we were driving home and you saw him relaxing in his carrier like it was a hammock or something - you said "he looks so calm" and then were like "his name is Comet!"

Well now you are in the stars with Comet and I'm here on Earth with Jupiter. We will miss you down here, but I'm glad you have someone so weird and entertaining to keep you company. 

-Jennie
