Let us convince you with Greek letters, lots of them.
At the onset of Campaign Week, with list members focused on selling themselves in return for lucrative votes, friendships on the line, and emotions running especially high, we here at the Quarterly believe it would be wise to refocus and reframe what exactly happens during this manic time. Given that qualitative speculation can be undertaken by anyone and everyone and has little weight in the grand scheme of things, we instead decided to take a quantitative approach, identifying several key indicators, collecting data, and creating an all-encompassing algorithm that grades the performance of each list running for election.
In the following report, we will walk you through the indicators we chose and why we chose them, the data we collected, and the equations we created to express success, or lack thereof, in a particular category. Finally, we will show you the weights we attributed to each indicator, walk you through the fuzzy math quickly, and end up with a grade on our beloved French “1 through 20” scale for each list. Hopefully, none of you will need to stay late into May for the makeup exam.
In selecting our various indicators, we defined two aspects of a list as more important than most others: its reach and its likeability. If you don’t hear from a list, you won’t vote for it, and if you don’t like the list’s members, you won’t vote for them. We also weeded out several possible indicators, including “Number of Occasions On Which Free Food is Provided”, as we soon understood that free food was, in fact, anywhere and everywhere for the taking. We contemplated questioning the relative “sportiness” of the two AS lists and the “studiousness” of the BDE lists but realized, too, that those were fruitless endeavors – everyone runs around during Campaign Week and no one studies. Ultimately, we landed on five key indicators, some of which we broke down into two separate multipliers. We will first tackle our “reach” category and then move on to “likeability”.
Diversity: Always Key – We identify diversity as an especially important asset for a list to have, both to reach the greatest number of people, from France and abroad, and to create the largest possible friend and peer network. Distinguishing between a friend and a peer, although a subtle difference, is crucial to us as well, leading us to divide our diversity indicator into two separate ratios, one representing program diversity and the other representing association spread. Just as a list composed of 14 Eurams will most likely not capture the Euraf vote, a list with very little reach in the way of associations may not be heard by as many people as a list with high reach in this regard. In Table 1, pictured below, you will find both the program breakdown and the total associations represented by each list. Note that we allowed association representation to overlap, meaning that if two list members were part of the same association, we count it twice instead of once, as intra-association divisions, such as poles, may limit reach within an association.
We then introduce the notion of deviance, which, in this context, is deviance from perfect equal program representation within a list. Effectively, we define perfect equality as an equal number of Eurafs as Eurams on a list, and deviance is calculated by subtracting the “perfect middle” from the “dominant” list number. For instance, in a list of 14 members, perfect equality would be defined by 7 Eurams and 7 Eurafs, while, in the case of SmASh Bros, their “dominant” number was 9, meaning that their resulting deviance is 2. In order to obtain this first multiplier, which is a number between 0 and 1, we create the following expression, shown in Figure 1, where ρ is perfect equality and β is deviance.
Deviance from Perfect Equality
From this, we input all of the data shown in Table 1 to receive our first ratios for each list, shown in Table 2 below.
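For readers who prefer code to Greek letters, here is a minimal Python sketch of the calculation above. The expression in Figure 1 is not reproduced here, so we assume it reads (ρ − β)/ρ, consistent with the description of a value between 0 and 1; the SmASh Bros numbers are the ones quoted in the text.

```python
def deviance_multiplier(list_size: int, dominant: int) -> float:
    """Deviance-from-perfect-equality multiplier, assuming Figure 1
    expresses (rho - beta) / rho, where rho is half the list size
    (perfect equality) and beta is the dominant bloc's excess over it."""
    rho = list_size / 2   # perfect equality, e.g. 7 on a 14-person list
    beta = dominant - rho # deviance, e.g. 9 - 7 = 2 for SmASh Bros
    return (rho - beta) / rho

# SmASh Bros example from the text: 14 members, dominant program has 9
print(round(deviance_multiplier(14, 9), 3))  # → 0.714
```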
Having obtained one half of our first fully expressed indicator, we move on to the latter half: our association participation (AP) ratio, which we compute using the expression in Figure 2, where Γ is a given list’s association participation number and γ is the highest association participation number among all lists. We will refer back to a similar model several times throughout this report, so, to make clear the recurring variable γ that serves as the denominator for this model, we note that it refers to the performance of the best-performing list within a certain category. In the following figure, it refers to the list with the greatest association participation, but you will see it in other similar contexts later, representing different notions. This was our way of “setting the bar” in some categories, so that comparison was not absolute, but relative where appropriate.
Association Participation Ratio
By inputting data pulled from Table 1 above, we receive Table 3, which shows each list’s AP ratio, the penultimate step before we are able to compute our final comprehensive diversity multiplier.
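The recurring relative model described above fits in one line of Python; the association counts used below are hypothetical, since the contents of Table 1 are not reproduced in this text.

```python
def relative_ratio(value: float, best: float) -> float:
    """The Quarterly's recurring relative model: a list's number in a
    category (Gamma) divided by the best-performing list's number (gamma)."""
    return value / best

# Hypothetical example: a list spanning 18 associations, best list spanning 24
print(relative_ratio(18, 24))  # → 0.75
```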
Finally, multiplying the ratios of each list, we receive each group’s diversity multiplier. However, the result in Table 4 will differ from the result you get by multiplying one list value by the other because, as mentioned earlier, we’ve attributed weights to each indicator. We’ve given the diversity indicator a weight of 30%, matching the qualitative importance we’ve attributed to it, which means that we multiply the resulting value by 0.3, as seen in Figure 3.
|Weighted Diversity Multiplier|
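Assuming Figure 3 multiplies the two diversity ratios together and then applies the 30% weight, as described above, a Python sketch might read as follows; the input ratios are hypothetical.

```python
def weighted_diversity(program_ratio: float, ap_ratio: float,
                       weight: float = 0.30) -> float:
    """Figure 3 as described: the two diversity ratios multiplied
    together, then scaled by the indicator's 30% weight."""
    return weight * program_ratio * ap_ratio

# Hypothetical ratios: 5/7 program equality, 0.75 association participation
print(round(weighted_diversity(5/7, 0.75), 3))  # → 0.161
```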
Party Politics – The next best way we find that a list can appeal to and reach as many voters as possible is via the classic party. Whether it’s in a laundromat, in several different apartments, or outside, a party is a party, and for us, it is quantity that matters. The more parties your list organizes, either solo or in concert with another list, the higher your multiplier. After scouring all of the official schedules posted by each list on their respective sites, we compiled the number of parties each list throws, including bar nights but not pre-games, shown in Table 5.
We use a relatively simple ratio here in Figure 4, where θ is the number of parties a given list throws and γ is the highest party organization number among all lists. We attribute a weight of 10% to this multiplier because of both the party overlap and the disassociation that may occur. Especially for parties that are cross-list efforts, their relative success, as it pertains to the reach of a given list, is diluted by the fact that other lists are present and developing their reach at the same time. On top of this, for bureaus that, generally, are not so focused on party planning and other such social events, voters may disassociate them from the event they attended, even if, officially, they organized it.
Party Ratio Multiplier
Inserting data from Table 5 into the expression shown in Figure 4, we receive our weighted party ratio multiplier for each list, shown in Table 6.
|Weighted Party Ratio Multiplier|
The War of the Videos – Each list’s true first impression comes with the video it releases. Its originality, artistic value, and clarity are indicators that, in any other context, would be fine ways to compare one video to another. However, because we have yet to find a way to accurately quantify beauty or Paul-Henry Thiard’s editing prowess, we revert to what Facebook offers us: video views and an assortment of reactions. We compile the number of views each video got (rounded to the nearest hundred by Facebook analytics), as well as the number and type of reactions each video received, in Table 7.
| Love Reacts | Like Reacts | Wow Reacts | Laugh Reacts | Total Reactions | Total Views |
To properly gauge the success of each list’s video, we bifurcate our multiplier to cover both its reach and likeability, accomplishing the former through a video view ratio (VVR) and the latter through a video reaction ratio (VRR). The former figure is similar to many we’ve already created, in that it compares top performers within a field to others, here comparing video views between lists. However, we decide to narrow the scope of this ratio, comparing the reach of each video between bureaus, and not all lists. It’s well-known that the Bureau des Elèves race is often the most popular or recognized one, which is reflected in our data, which show both BDE lists with the most video views among all lists; as such, comparing views in a cross-bureau fashion unnecessarily lowers the scores of other lists, who are not directly competing with BDE lists for reach in this sense. In Figure 5, we express this VVR using κ for the number of views a list received and γ for the highest view count among the two (or one) lists running for election within a specific bureau. Table 8, then, shows the ensuing VVRs for each list.
Video View Ratio
|Video View Ratio|
Our video reaction ratio, on the other hand, is an absolute measurement that compares the popular success of each video, using Facebook’s predetermined reaction icons to value videos individually and through the digital responses they elicit. We define the number of “love reacts” as the variable ψ, the number of “like reacts” as η, the number of “wow reacts” as ζ, and the number of “laugh reacts” as ω, while the total reactions of a given list’s video are defined by φ. Moreover, knowing well that a “love react” is far more powerful than any “like react” could ever be, we take this into consideration by giving each type of reaction a different multiplier that we find to be in line with the underlying sentiment it carries. We hold that there is no higher praise than a “love react”, so, for this reason, we grant it a 1.0x multiplier, while the “like react” receives a 0.8x multiplier, the “wow react” a 0.9x multiplier, and the “laugh react” a 0.6x multiplier. Figure 6 consolidates all of this information into a single expression, while Table 9 gives the resulting values.
Video Reaction Ratio
|Video Reaction Ratio|
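Assuming Figure 6 sums the reaction counts scaled by the sentiment multipliers above and divides by the total reaction count φ, the VRR can be sketched as follows; the reaction counts are hypothetical, since Table 7’s data are not reproduced here.

```python
def video_reaction_ratio(love: int, like: int, wow: int, laugh: int) -> float:
    """Figure 6 as described: each reaction type scaled by the Quarterly's
    sentiment multipliers (love 1.0, wow 0.9, like 0.8, laugh 0.6),
    divided by the total reaction count (phi)."""
    total = love + like + wow + laugh
    return (1.0 * love + 0.8 * like + 0.9 * wow + 0.6 * laugh) / total

# Hypothetical video: 120 loves, 60 likes, 10 wows, 30 laughs
print(round(video_reaction_ratio(120, 60, 10, 30), 3))  # → 0.886
```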
Combining Figures 5 and 6 and adding a weight of 20% to our video multiplier, applied given the torch-bearing role campaign videos play, we end up with Figure 7, from which we are able to calculate the final values of the video multiplier for each list in Table 10, thereby effectively settling the war of the videos. Don’t be mad at us if the results don’t turn out in your favor, be mad at the math.
|Overall Weighted Video Multiplier|
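The text does not spell out how Figure 7 combines the two ratios; assuming it multiplies them, mirroring how the diversity indicator was assembled, before applying the 20% weight, the sketch would be:

```python
def weighted_video_multiplier(vvr: float, vrr: float,
                              weight: float = 0.20) -> float:
    # Assumption: Figure 7 combines VVR and VRR by multiplication,
    # mirroring the construction of the diversity multiplier,
    # then applies the indicator's 20% weight.
    return weight * vvr * vrr

# Hypothetical values: VVR of 0.9, VRR of 0.886
print(round(weighted_video_multiplier(0.9, 0.886), 3))  # → 0.159
```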
Follow for Follow – While, for most, it’s an unhealthy obsession, the Quarterly finds merit in looking into each list’s follow-back propensity on its Instagram profile, as a good amount about a list’s likeability can be gleaned from that single statistic. Whether you want to stay up to date on Campaign Week events or see your friends pose in front of their list banner like proud parents, the follow-back propensity conveys the number of people who genuinely like your list or what it offers, while safeguarding against mass-following behaviors that are bound to bring in followers eventually, at the expense of the ratio. In looking at this propensity, we aim to analyze the most authentic statistic we can find in the realm of social media during Campaign Week. To calculate it, we take the number of accounts a list is followed by and divide that by the number of accounts it is following, data which can be found in Table 11.
| Accounts Followed By | Accounts Following | Follow-Back Propensity |
To apply the same relative methodology, as it is unreasonable to expect a 1:1 follow-back propensity, we use a model similar to the party ratio and the association participation ratio, with a weight of 10%, fitting the statistic’s lesser influence on voting. Our numerator in Figure 8 is the given list’s follow-back propensity, indicated by δ, while the denominator, denoted γ, represents the highest follow-back propensity among all list accounts, with weighted values shown in Table 12.
Weighted Relative Follow-Back Propensity
|Weighted Relative Follow-Back Propensity|
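Putting the two steps together, the propensity itself and its relative, weighted form as described for Figure 8 can be sketched like so; the follower counts are hypothetical.

```python
def weighted_fbp(followed_by: int, following: int,
                 best_propensity: float, weight: float = 0.10) -> float:
    """Follow-back propensity (accounts followed by / accounts following),
    normalised by the best propensity among all lists (gamma), then
    weighted at 10%, per the description of Figure 8."""
    delta = followed_by / following  # the list's own propensity
    return weight * delta / best_propensity

# Hypothetical: 800 followers, following 500 accounts; best list's propensity 2.0
print(round(weighted_fbp(800, 500, 2.0), 2))  # → 0.08
```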
The Beer Poll – A legendary phenomenon, the beer poll asks those who wish to take it one simple question: “Who would you rather sit down for a cold one with?” Despite its unassuming and largely unacademic roots, the beer poll has successfully predicted every US President since its inception, and we’d like to bring the same unflappable poll to our own elections here in Reims. Having slightly amended the question, which now reads “On a scale from 1 to 10, how much would you enjoy having a beer with the following lists?”, we sent the poll out to SciencesPo students, limiting responses to one per user and anonymizing them, and received 55 responses in total, a number we can confidently use as a sample of our voter population. Moreover, it is important to note that, for every partisan, biased list member who filled in 10s for their own list and 1s for the counterlist, there was a voter on the opposite side of that spectrum who could and did do the same, thereby diluting the effect each had on the end result of the beer poll. Without further ado, results in Table 13!
|Beer Poll Average|
For our last multiplier, we maintain an absolute perspective, dividing each average by 10 to receive a number between 0 and 1 (Table 14) that represents a list’s collective likeability, and applying a 25% weight to the expression shown in Figure 9. After all, who are we to look down upon the efficacy and trustworthiness of a poll with 100% accuracy?
Weighted Beer Poll Multiplier
|Weighted Beer Poll Multiplier|
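As described for Figure 9, this multiplier is simply the poll average scaled down to the unit interval and weighted; a sketch with a hypothetical poll average:

```python
def weighted_beer_poll(average: float, weight: float = 0.25) -> float:
    # Figure 9 as described: the beer poll average divided by 10
    # (yielding a value between 0 and 1), then scaled by the 25% weight.
    return weight * average / 10

# Hypothetical poll average of 7.2 out of 10
print(round(weighted_beer_poll(7.2), 2))  # → 0.18
```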
A Secret Indicator
Knowledge is Power – At the Quarterly, we believe that knowledge trumps all. Staying true to that tenet, we have one final special indicator, the “Knowledge is Power” (KiP) indicator, which attempts to quantify each list’s combined brainpower and express it through a number. When all was said and done, it wasn’t too hard (Table 15).
| Team Members | Team Members Who Purchased the Quarterly’s First Edition |
In weighting this indicator, and, by extension, ourselves, at 5%, we acknowledge that we, and, more importantly, the formal media in general, have very minimal impact on who votes which way and which list runs a better campaign. While this indicator is to be taken less seriously in light of the 15 tables and 9 expressions we flooded you with above, we also want to make a more serious plea to student publications, and to groups of students with ideas for publications, to enliven this week and take it in a direction it has never been taken before. Media is a force in politics in the real world; why shouldn’t it be a force at SciencesPo? Finally, defining σ as the number of list members who purchased the Quarterly’s First Edition and χ as the total number of members of a given list, we create Figure 10, which, in turn, yields Table 16.
Knowledge is Power Multiplier
|Knowledge is Power Multiplier|
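Assuming Figure 10 expresses the fraction σ/χ scaled by the 5% weight, as the definitions above suggest, the KiP multiplier sketches out as follows; the purchase counts are hypothetical.

```python
def weighted_kip(buyers: int, members: int, weight: float = 0.05) -> float:
    """Figure 10 as described: sigma (members who bought the Quarterly's
    First Edition) over chi (total list members), weighted at 5%."""
    return weight * (buyers / members)

# Hypothetical: 7 of 14 members bought the First Edition
print(weighted_kip(7, 14))
```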
Pushing all of the multipliers together into one gorgeous, dauntingly complex-looking expression yields the beautiful chaos that is Figure 11 (it deserves a more grandiose name, doesn’t it?). We multiply by 20 to convert our number into a proper grade, and Table 17 shows each list’s final transcript. One final exam. One grade. No retakes. And no harmonization.
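Assuming Figure 11 simply sums the six weighted multipliers (the weights, 30% + 10% + 20% + 10% + 25% + 5%, conveniently total 100%, so the sum stays between 0 and 1) before scaling by 20, the final grade could be computed as follows; all multiplier values here are hypothetical.

```python
def final_grade(multipliers: dict) -> float:
    """Figure 11 as sketched here: the six weighted multipliers, whose
    weights sum to 100%, added together and scaled by 20 to land on the
    French 1-to-20 grading scale."""
    return 20 * sum(multipliers.values())

# Hypothetical weighted multipliers for one list
grade = final_grade({
    "diversity": 0.161, "party": 0.08, "video": 0.159,
    "follow_back": 0.08, "beer_poll": 0.18, "kip": 0.025,
})
print(round(grade, 1))  # → 13.7
```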
The Final Grade
As a final note from us at the Quarterly, we would like to thank each and every list member profoundly, not only for their help in the data collection process, but for, together, creating such a dynamic week. We’ll be back next year, with more data, more advanced algorithms, and more coverage of what exactly happens behind the scenes of Campaign Week. All through an economic lens, of course.
Note from the Quarterly: This report was initially due for publication during Campaign Week; however, following a contention raised by the Campaign Committee, we were forced to delay publication until after the votes were counted and announced. We find it important to note that we in no way changed our data to reflect the results of the elections; the data we had compiled and the algorithm we had created remain unchanged.