Listen to yourself!: Spotify, Ancestry DNA, and the Fortunes of Race Science in the Twenty-First Century

If you could listen to your DNA, what would it sound like? A few answers, at random: In 1986, the biologist and amateur musician Susumu Ohno assigned pitches to the nucleotides that make up the DNA sequence of the protein immunoglobulin, and played them in order. The gene, to his surprise, sounded like Chopin.
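(For the curious, a toy version of this kind of sonification is easy to sketch in Python. The base-to-pitch mapping below is an arbitrary illustration, not Ohno's actual, more elaborate scheme.)

```python
# Toy sonification: assign one pitch per nucleotide and read the
# sequence in order. The mapping is arbitrary, chosen for illustration.
PITCHES = {"A": "C4", "C": "E4", "G": "G4", "T": "B4"}

def sonify(sequence: str) -> list[str]:
    """Return a pitch name for each recognized base in a DNA string."""
    return [PITCHES[base] for base in sequence.upper() if base in PITCHES]

print(sonify("ATGGTGCAC"))
# ['C4', 'B4', 'G4', 'G4', 'B4', 'G4', 'E4', 'C4', 'E4']
```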

With the advent of personalized DNA sequencing, a British composition studio goes one better, offering a bespoke three-minute suite based on your DNA’s unique signature, recorded by professional soloists—£300 for the basic package, or £399 for a full orchestral arrangement.

But the most recent answer to this question comes from the genealogy website Ancestry.com, which in Fall 2018 partnered with Spotify to offer personalized playlists built from your DNA’s regional makeup. For a comparatively meager $99 (and a small bottle’s worth of saliva) you can now not only know your heritage, but, in the words of Ancestry executive Vineet Mehra, “experience” it. Music becomes you, and through music, you can become yourself.


As someone who researches the history of connections between music and genetics for a living, I am perhaps not the target audience for this collaboration. My instinct is to look past the ways it might seem innocuous, or even comical—especially when cast against the troubling history of the use of music in the rhetoric of American eugenics, and the darker ways that the specter of debunked race science has recently returned to influence our contemporary politics.

During the launch window of the Spotify collaboration, the purchase of a DNA kit was not required, so in the spirit of due diligence I handed over to Spotify what I know of my background: English, Scottish, a little Swedish, a color chart of whites of various shades. (This trial period has since ended, so I have not been able to replicate these results—however, some sample “regional” playlists can be found on the collaboration homepage).


While I mentally prepared myself to experience the sounds of my own extreme whiteness, Ancestry and Spotify avoided the trap of overtly racialized categories. In my playlist, Grime artist Wiley was accorded the same Englishness as the Cure. And ‘Scottish-Irish’, still often a lazy shorthand for ‘White’, boasted more artists of color than any other category. Following how the genetic tests themselves work, geography, rather than ethnicity, guides the algorithm’s hand.

As might be expected, the playlists lean toward Spotify’s most popular sounds: “song machine” pop and hip-hop. But in smaller regions with less music in Spotify’s catalog, the results were more eclectic—one of the few entries of Swedish music in my playlist was an album of Duke Ellington covers from a Stockholm-based big band, hardly a Swedish “national sound.” Instead, the music’s national identity is located outside of the sounding object, in the information surrounding it, namely the location tag associated with the recording. In other words: this is a nationalism of metadata.

One of the common responses to the Ancestry-Spotify partnership was, as succinctly expressed by Sarah Zhang at The Atlantic: ‘Your DNA is not your culture’. But because of the muting of musical sound in favor of metadata, we might go further: in Spotify’s catalog, your culture is not even your culture. The collaboration works because of two abstractions—the first, from DNA, to a statistical expression of probable geographic origin; and second from musical sound and style characteristics, to metadata tags for a particular artist’s location. In both of these moves, traditional sites of social meaning—sounding music, and regional or familial cultural practice—are vacated.
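To make the mechanics of that second abstraction concrete, here is a hypothetical sketch of a metadata-driven playlist builder. Spotify’s actual system is not public, and every name and number below is invented; the point is only that nothing about the music itself is consulted, just the artist’s location tag.

```python
# Hypothetical sketch: ancestry percentages select tracks purely by
# location metadata. No audio, style, or cultural feature is consulted.
import random

catalog = [
    {"title": "Song A", "artist": "Artist 1", "location": "England"},
    {"title": "Song B", "artist": "Artist 2", "location": "Scotland"},
    {"title": "Song C", "artist": "Artist 3", "location": "Sweden"},
    # ... in reality, millions of tracks, each tagged with a location
]

def build_playlist(ancestry: dict[str, float], length: int = 20) -> list[dict]:
    """Draw tracks per region in proportion to ancestry percentages."""
    playlist = []
    for region, share in ancestry.items():
        regional = [t for t in catalog if t["location"] == region]
        quota = min(round(share * length), len(regional))
        playlist.extend(random.sample(regional, quota))
    return playlist

# e.g. an ancestry report of 60% England, 30% Scotland, 10% Sweden
print(build_playlist({"England": 0.6, "Scotland": 0.3, "Sweden": 0.1}))
```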

Synthetic Memetic / Matthew Gardiner (AU): Gardiner composed a DNA sequence in such a way that the series of nucleotide bases in it correspond to the letters of the song title “Never Gonna Give You Up” by Rick Astley, and then integrated them symbolically into a pistol. Credit: Sergio Redruello / LABoral Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

There is a way in which this model could come across as subversive (which has not gone unnoticed by Ancestry’s advertising team). Hijacking the presumed whiteness of a Scotland or a Sweden to introduce new music by communities previously barred from the possibility of ‘Scottishness’ or ‘Swedishness’ could be a tremendously powerful way of building empathy. It could rebut the very possibility of an ethno-state. But the history of music and genetics suggests we might have less cause for optimism.

In the 1860s, Francis Galton, coiner of the word ‘eugenics’, turned to music to back up his nascent theory of ‘hereditary genius’—that artistic talent, alongside intelligence, madness, and other qualities, was inherited, not acquired. In Galton’s view, musical ability was the surest proof that talents were inherited, not learned, for how else could child prodigies stir the soul in ways that seem beyond their years? The fact of music’s irreducibility, its romantic quality of transcendence, was for Galton what made it the surest form of scientific proof.

Galton’s ideas flourished in America in the first decades of the twentieth century. And while American eugenics is rightly remembered for its violence—from a sequence of forced sterilization laws beginning with Indiana in 1907, to ever-tightening restrictions on immigration, and scientific propaganda against “miscegenation” under Jim Crow—its impact was felt in every area of life, including music. The Eugenics Record Office, the country’s leading eugenic research institution, mounted multiple studies on the inheritance of musical talent, following Galton’s idea that musical ability offered an especially persuasive test-case for the broader theory of heritability. For ten years the Eastman School of Music experimented on its newly admitted students using a newly developed kind of “musical IQ test,” psychologist Carl Seashore’s “Measures of Musical Talent.” Seashore himself presented results from his tests at the Second International Congress of Eugenics in New York in 1923, the largest gathering of the global eugenics movement ever to take place. His conclusion: that musical ability was innate and inherited—and if this was true for music, why not for criminality, or degeneracy, or any other social ill?

From “The Measurement of Musical Talent,” Carl E. Seashore, The Musical Quarterly Vol. 1, No. 1 (Jan., 1915), p. 125.

Next to the tragedy of the early twentieth century, Spotify and Ancestry teaming up seems more like a farce. But scientific racism is making a comeback. Bell Curve author Charles Murray’s career is enjoying a second wind. Border patrol agents hunt “fraudulent families” based on DNA swabs, and the FBI searches consumer DNA databases without customers’ knowledge. ‘Unite the Right’ rally organizer Jason Kessler ranked races by IQ, live on NPR. And while Ancestry sells itself on liberal values, many white supremacists have gone looking for ‘scientific’ confirmation of their sense of superiority, and consumer DNA testing has given them answers (though, often, not the answers they wanted).

As consumer genetics gives new life to the assumptions of an earlier era of race science, the Spotify-Ancestry collaboration is at once a silly marketing trick and a tie, witting or unwitting, to centuries of hereditarian thought. It reminds us that, where musical eugenics afforded a legitimizing glow to the violence of forced sterilization, the Immigration Acts, and Jim Crow, Spotify and Ancestry can be seen as sweeteners to modern-day race science: to DNA tests at the border, to algorithmic policing, and to “race realists” in political office. It reminds us that the appeal of these abstractions—from music to metadata, from culture to geography, from human beings to genetic material—is also their danger. And finally, that if we really want to hear our heritage, listening, rather than spitting into a bottle, might be the best place to start.

Featured Image: “DNA MUSIC,” Creative Commons Attribution-Share Alike 4.0 International

Alexander Cowan is a PhD candidate in Historical Musicology at Harvard University. He holds an MMus from King’s College, London, and a BA in Music from the University of Oxford. His dissertation, “Unsound: A Cultural History of Music and Eugenics,” explores how ideas about music and musicality were weaponized in British and US-American eugenics movements in the first half of the twentieth century, and how ideas from this period survive in both modern music science, and the rhetoric of the contemporary far right.

REWIND! . . . If you liked this post, you may also dig:

Hearing Eugenics–Vibrant Lives

In Search of Politics Itself, or What We Mean When We Say Music (and Music Writing) is “Too Political”–Elizabeth Newton

Poptimism and Popular Feminism–Robin James

Straight Leanin’: Sounding Black Life at the Intersection of Hip-hop and Big Pharma–Kemi Adeyemi

Publishing an Open Access Textbook on Environmental Sciences: Conservation Biology in Sub-Saharan Africa


By Richard B. Primack and John W. Wilson.

The book contains hundreds of photographs from Africa, such as this cheetah family, which are published as CC BY 4.0. Photograph by Markus Lilje, CC BY 4.0.

For the past six years we have been working to produce the first conservation biology textbook dedicated entirely to an African audience. The need for this work has never been more pressing. Africa has some of the world’s fastest growing human populations. This growth, together with a much-needed push for economic development, exerts unsustainable pressure on the region’s rich and unique biological treasures. Consequently, Africa is rapidly losing its natural heritage; without action, there is a real chance that the world’s children may never have the opportunity to see gorillas, rhinoceros, or elephants in the wild.

To address this alarming loss of Africa’s natural heritage, there is an urgent need to produce the next cohort of well-trained conservation, wildlife, and environmental leaders, able to confront challenges head-on. To facilitate this capacity building, we aimed to write a comprehensive textbook, designed for conservation biology courses across Sub-Saharan Africa, and as a supplemental text for related courses in ecology, environmental sciences, and wildlife management. Our aim was to strike a balance between theory, empirical data, and practical guidelines to make the book a valuable resource not only for students, but also for conservation professionals working in the region.

To help in its teaching mission, the book provides numerous examples of conservation in action, such as this biologist from Guinea instructing citizen scientists on wildlife monitoring. Photograph by Guinea Ecology, CC BY 4.0. 

But we faced a major challenge: how could we effectively reach our target audience, even in the most isolated corners of Sub-Saharan Africa? Print publishers would be unable to produce and distribute this type of book across dozens of African countries. And at 694 pages, with hundreds of color photos, the book would be too expensive for most African students to buy, so the project would be neither profitable nor feasible for a print publisher. For these reasons, we concluded that the textbook would reach the widest audience and have the greatest impact if it was produced under an Open Access license, which guarantees free distribution rights to anyone who may benefit from the work.

The textbook, eventually published under a Creative Commons (CC BY) license by Open Book Publishers, was a resounding success. As evidence of how much the work was needed, the book was viewed nearly 7,000 times within six months of publication.  There is no question: this remarkable reach, and the impact this book is having in making conservation training more accessible, could only have been achieved through Open Access publishing.

Conservation Biology in Sub-Saharan Africa was recently published as Open Access. Click here to read and download this title for free. You can also follow @ConsBioAfrica or join the textbook discussion forum here.

Why is open education resource creation, management and publishing important? Reflections for Open Book Publishers on Open Education Week 2020



The suggested subject for this reflection was "why it is important to publish educational resources in Open Access," but I'm not happy with that emphasis on the end point. It's important that learning resources are not static once published; rather, on publication a resource enters an iterative cycle of revision-reuse-evaluation-reflection.

My starting point for sharing educational resources was that high quality teaching and learning resources are difficult and time consuming to create; and like anything that is difficult and time consuming they are costly in terms of money or, more frequently, unpaid effort. To me, OER made sense as a means of sharing the effort of creating learning resources, dividing work between partners with different skills and viewpoints. It also made sense to get input from a wider range of contributors in such a way that the result is of use to a wider audience, providing a greater return for this effort.

This view has consequences not just for publishing, but for authoring and resource management during an extended lifecycle. Key among these are the need for collaborative authoring processes, tools that support these processes, and publication in formats that are interoperable with these tools. So, it is important to publish educational resources in open access because this supports a sustainable approach to the creation and widespread use of quality educational resources.

Phil Barker, Cetis LLP, http://people.pjjk.net/phil. Contributor to Open Education: International Perspectives in Higher Education, edited by Patrick Blessinger and TJ Bliss. To read/download this title or read Phil Barker's co-authored chapter 'Technology Strategies for Open Educational Resource Dissemination', visit https://www.openbookpublishers.com/product/531


The first problem with traditional textbooks is that they are too heavy. They are lumps in your school bag and impossible to read on your phone. The second problem is that they are too expensive. Thirty, forty pounds or more. The third problem is that traditional textbooks turn our common knowledge into private profits. They tell you that "2+2=4" and charge money for it. Open source books, on the other hand, are light, free and for everyone's benefit. Insist that your teachers use them.

Professor Erik Ringmar, author of History of International Relations: A Non-European Perspective. Click here to read and/or download this book for free, and visit http://irhistory.info/ to access the author's research blog.


After publishing a textbook with Open Book Publishers, I have become even more convinced that educational materials, more than any other text, need to be made available to everyone as Open Access resources.

There is still a certain stigma associated with “free” books and open publication, as if texts that are made freely available should have less value or quality than the books printed by large publishing houses for a profit. But Open Book Publishers’ publication model, which is founded, as I have personally witnessed, on a very rigorous review and a highly professional editorial process, shows that it is possible to offer high quality textbooks and other educational materials at no cost for students.

Being a strong believer in the need for society to provide free and open education to everyone, not just in terms of access to learning materials, but also to classes, teachers and institutional support, I was naturally inclined to distribute my textbook under an Open Access license. My very positive experience working with the editorial team at Open Book Publishers has reaffirmed my commitment to this model. I will certainly try to make freely accessible any other educational materials that I produce in the future.

I just wish that educational authorities and institutions, as well as private donors, would increase their support for small but very professional editorial projects like OBP. It would be a way to ensure that good education is not the privilege of wealth, but the gift of intellectual curiosity.


Professor Ignasi Ribó, author of Prose Fiction: An Introduction to the Semiotics of Narrative. Click here to read and/or download this book for free. You can also watch an interview with the author in which he discusses the background of this project at https://www.youtube.com/watch?v=nyGidolHPWg&feature=emb_title.


The development of Open Educational Resources (OER), which include Open Access publications, is growing in popularity as more faculty realize the benefits not only of using OER in their own teaching, but also of developing and sharing these resources with peers. In addition to the obvious cost savings for students, revising and developing OER allows a faculty member to create a highly targeted resource, speaking specifically to the content they want students to have, that is relevant, flexible, and adaptable.

The biggest hurdle we have to face in the Open Education area is the time and resources it takes to develop and distribute these resources. As Open Education is a newer trend in the field and diverges from the typical pathways of faculty publishing and presenting, some faculty and institutions have been slow to support and recognize Open Education as a viable and rigorous form of academic publishing.

If many faculty in a field all started developing complementary resources and sharing them, it could drastically reduce the investment currently required of each individual faculty member. We need to create a culture shift in education, focused on openness and the sharing of resources, in order to distribute the workload of OER production and openly published materials across many people in the field, thereby creating a diverse and rich network of easily adaptable content that is relevant, targeted, and, best of all, affordable for the students who need it!

Nathan Whitley-Grassi, PhD, Associate Director for Educational Technologies, State University of New York, Empire State College. Contributor to Open Education: International Perspectives in Higher Education, edited by Patrick Blessinger and TJ Bliss. To read/download this title or read Dr Whitley-Grassi's chapter 'Expanding Access to Science Field-Based Research Techniques for Students at a Distance through Open Educational Resources', visit https://www.openbookpublishers.com/product/531. https://www.linkedin.com/in/nathanwhitleygrassi/


The UN’s Sustainable Development Goal 4 sets out several ambitions. One is to extend learning opportunities to everyone at all levels of education. The challenge of educating everyone appropriately at all levels is massive. Sir John Daniel, former President and CEO of the Commonwealth of Learning, calculated that to bring all countries up to the higher-education participation levels of the best-performing countries would require opening a new university campus every day for the foreseeable future. The economic and social impacts of doing that alone appear enormous, to say nothing of dealing with schools and lifelong learning. A considerable expansion of open education, in the form of open educational resources with an open license attached, seems an obvious way to limit the economic impacts, but the social impacts depend on how inclusive and accessible the educational opportunities are. Unfortunately, openness and digitalisation do not, in themselves, make it easier to access, afford or find the educational opportunities that open education can offer; these aspects all depend on who is deciding what is open, when and for whom, whether an open license is used, and how digital technologies and infrastructure are implemented and managed. Several issues can arise for potential learners: local bandwidth may mean the resources are difficult to study; the costs of using Internet or mobile data networks may be prohibitive; the materials may not be formatted for the learner’s digital device; or the resources may be in a learner’s second or third language, to name but a few challenges. Eliminating inequalities in access to education requires systemic changes in how education is organised at all levels, more than systematic changes in the way we currently do things. Thus the open access publication of educational resources is a necessary but not sufficient response to extend learning opportunities to all.

Andy Lane, Professor of Environmental Systems, The Open University. Contributor to Open Education: International Perspectives in Higher Education, edited by Patrick Blessinger and TJ Bliss. To read/download this title or read Andy Lane's chapter 'Emancipation through Open Education: Rhetoric or Reality?', visit https://www.openbookpublishers.com/product/531.

The Environmental Impact of Open Book Publishers


At Open Book Publishers, we are working to minimise our environmental impact. In 2020 we have undertaken to shrink our carbon footprint using various methods, including not travelling by plane and making our offices more energy efficient. (See here for more information.)

All the paperback and hardback copies of our books are printed using Print on Demand (PoD) services. This cuts down on energy and material waste since books are only printed after a purchase has been made—there is no excess stock.

Lightning Source UK Ltd runs our PoD printing. They print and ship from the UK, US and Australia, and they also have printer partners who print for them globally. Individual copies of our books might be printed by Lightning Source or by one of these printer partners.

Information about Lightning Source UK Ltd's environmental commitments and their certifications can be found here: https://www.ingramspark.com/environmental-responsibility

Open Book Publishers would like to be able to give more concrete information about the environmental impact of our book production, and we remain in dialogue with Lightning Source about this issue.

Is prestige a problem? Considering the usefulness of prestige in academic book publishing


This is the full draft of an article published in Research Europe's 05 March 2020 issue. The edited article is free to read on Research Europe's website, and they kindly agreed that we could post the full version here under our blog's CC BY licence.

At the 14th Munin Conference in November last year, prestige was raised on multiple occasions as a drag on progress in Open Access (OA) publishing. Traditional legacy publishing—the model by which academic books and journals are published in print or closed-access digital formats at high cost and low volume—plainly does not take full advantage of digital developments that enable us to distribute content much more efficiently and effectively to many more readers. But authors, by and large, value these more prestigious legacy outlets extremely highly, particularly when it comes to books–those presses with the longest histories, the most stellar backlists, and the highest rejection rates.

Although such publishers have made gestures towards Open Access, they tend to be highly conservative in their approach: slow to adjust their business and production models to embrace OA, offering only a limited version (e.g. a PDF of a book designed for print), and imposing exorbitant charges on authors (prices vary, but between £10,000 and £15,000 is common for an academic monograph to be published Open Access under a Book Processing Charge model).

While authors continue to flock to the legacy presses, there is little incentive for them to change their approach, regardless of its effectiveness.

There is, though, an alternative ecosystem of non-profit, scholar-led and university presses who have embraced Open Access for books (in fact they are often born-OA publishers). Adema & Stone (2017) note the existence of four Open Access university presses and thirteen scholar-led Open Access publishers operating in the UK or publishing for the UK market.

These are presses invested in getting high-quality research to as many readers as possible, and in developing business models such that cost is not a barrier either for readers to read, or for authors to publish. Examples include Open Book Publishers and punctum books, who have a growing reputation for innovative processes and publications (whether in terms of business model, content, or format), high standards in research and production quality, and a focus on the wide dissemination of academic work in the service of the scholarly community.

This non-profit and collaborative approach has led easily to cooperation, and therefore to the creation of partnerships like ScholarLed—a consortium of five academic-led, non-profit OA book publishers developing powerful ways for small-scale OA presses to flourish—and the COPIM project, a major £3.5 million international partnership of researchers, libraries, the ScholarLed presses and infrastructure providers, which is building open, non-profit, community-governed infrastructure that can support a wide range of publishers of different sizes to create a resilient and diverse ecosystem for OA book publishing.

Notwithstanding these encouraging developments, publisher prestige continues to act as a powerful restriction on author choice. Many researchers who might otherwise wish to publish with an OA press think twice because of a concern that they or their work will be judged negatively in consequence—that their CV won’t look as gilded in comparison to colleagues; that they will be overlooked for prizes and promotion.

What do we mean when we talk about prestige?

There are several threads woven through the concept of prestige. One is quality: a prestigious publisher will have published research of distinction in the past, and their books might have high production values. Another is reputation: they are known for their previous good work, and they have attracted more talented authors as a result. Prestigious presses are often attached to renowned universities, with acclaimed academics participating in their peer-review processes. Their reputation has grown to such a degree that they are taken as a byword for excellence.

The problem with prestige, however, is that it has the capacity to overwhelm continued critical engagement. Prestige is a kind of currency, with transferable value for others—for those authors, say, whose work is published by a prestigious press and therefore judged more favourably in a competitive research environment.

It also sets the conditions of its own value. A press might have a record of past distinction, but is it continuing to maintain that record in the present—or has it, by virtue of the prestigious reputation it has acquired, created the conditions for its activities to be seen as the best or only proper way of proceeding?

Prestige is necessarily restrictive; it dilutes as it is shared. In signalling to the overburdened academic community what is supposedly the ‘best’ work in the field, it performs a winnowing function—but in a research environment in which more and more monographs are being published (indeed, in an environment that incentivises this activity, thanks to the emphasis universities place on the monograph when hiring), how much work is not being given its due because it is published by a less prestigious press, or, worse, not published at all?

Of equal concern, particularly given that most legacy publishers are so unsatisfactory when it comes to Open Access and other innovations in publishing, is the imposition of artificial scarcity when it comes to the author’s choice of publisher: I feel I must publish with a more prestigious outlet, even if my work will be much less widely read or appropriately presented. There is a kind of ‘Matthew effect’ in action as authors choose the more prestigious press, even if it dissatisfies them.

The veneration of prestige in academic publishing therefore limits the choice of authors and the accessibility of research; in signalling that a publisher will be valued today on what it achieved in the past, it deadens innovation. What might we replace it with?

Borrowing the term from Moore, Neylon, Eve, O’Donnell and Pattinson (2017) in their discussion of the fetishisation of excellence in higher education, I wonder if we might do better to think about the ‘soundness’ of a publisher—to focus on practices, rather than prestige. How is research chosen for publication by the press? What are its editorial and production standards? How does it engage with new developments in book production? How widely are its works disseminated, and is its business model sustained by hefty charges levied on authors or readers?

These are all valid ways to begin to think about the qualities of a press—although each one might be contentious to evaluate. But the point is precisely that they should be up for debate—that we are critically engaging with the terms on which research is distributed and assessed, rather than embracing the inertia engendered by a reliance on prestige.

Open education is key to the future of learning



Education is the key to human development and social mobility. Education is also the engine that drives economic growth and social development. Thus, education is essential for human progress. Education, and the knowledge that it produces, builds on itself from one generation to the next, making the human knowledge base ever-expanding and self-reinforcing. However, fast changes in technology have created increasingly complex and uncertain social orders. All these factors, in turn, have put a premium on lifelong and lifewide learning and on the ability to respond to fast-moving economic and social conditions, such as rapidly changing career fields and labor markets.

Because lifelong and lifewide learning have become a reality of the modern era, new types of education have become available in recent decades, including open education. Open education operates along a spectrum, with open universities (formal learning) at one end, open courseware (semiformal learning) in the middle, and open educational materials (non-formal learning) at the other. Examples of open education include the Open University in Great Britain, MIT’s OpenCourseWare, and Khan Academy. Thus, today there exist many types of open education to address the diverse needs of learners.

Open education platforms and practices are based on a philosophy that every person has a right to learn throughout their lives. The driving force behind open educational practices is the democratization of knowledge, which, in turn, is based on the principles of equity and inclusion.

Thus, open education is based on the notion that educational materials should be freely accessible to the public without onerous copyright or reuse restrictions. These ideas are discussed in the book, Open Education: International Perspectives in Higher Education.

Open education provides learning beyond that provided by traditional time- and place-based education systems. Because digital technology helps to eliminate time and place constraints, e-learning and distance learning are typically the provisioning modes of choice for open education. The key point is that educational systems should be more flexible in how they address the needs of learners. Since all learners have a right to learn during all phases of their lives, learning in the modern era needs to be flexible, accessible, and personalized.

For Professor Blessinger's two previous posts on open education, visit 'Enabling lifelong learning through open education' (https://blogs.openbookpublishers.com/enabling-lifelong-learning-through-open-education/) and 'Strengthening Democracy Through Open Education' (https://blogs.openbookpublishers.com/strengthening-democracy-through-open-education/).


‘The Tiberian pronunciation tradition of Biblical Hebrew’



The term ‘Biblical Hebrew’ is generally used to refer to the form of the language that appears in the printed editions of the Hebrew Bible, and it is this form that is presented to students in grammatical textbooks and reference grammars. The form of Biblical Hebrew that is presented in printed editions, with vocalization and accent signs, has its origin in medieval manuscripts of the Bible. The vocalization and accent signs are notation systems that were created in Tiberias in the early Islamic period by scholars known as the Tiberian Masoretes. The text of the Bible that appears in the medieval Tiberian manuscripts and has been reproduced in modern printed editions is known as the Tiberian Masoretic Text or simply the Masoretic Text.

The opening sections of modern textbooks and grammars describe the pronunciation of the consonants and the vocalization signs in a matter-of-fact way. The grammatical textbooks and reference grammars in use today are heirs to centuries of tradition of grammatical works on Biblical Hebrew in Europe, which can be traced back to the Middle Ages. The paradox is that this European tradition of Biblical Hebrew grammar, even in its earliest stages in eleventh-century Spain, did not have direct access to the way the Tiberian Masoretes were pronouncing Biblical Hebrew. The descriptions of the pronunciation that we find in textbooks and grammars, therefore, do not correspond to the pronunciation of the Tiberian Masoretes, neither their pronunciation of the consonants nor their pronunciation of the vowels, which the vocalization sign system originally represented. Rather, they are descriptions of other traditions of pronouncing Hebrew, which originate in traditions existing in Jewish communities, academic traditions of Christian Hebraists, or a combination of the two.

In the last few decades, research of a variety of manuscript sources from the medieval Middle East, some of them only recently discovered, has made it possible to reconstruct with considerable accuracy the pronunciation of the Tiberian Masoretes, which has come to be known as the ‘Tiberian pronunciation tradition’ or the ‘Tiberian reading tradition’. It has emerged from this research that the pronunciation of the Tiberian Masoretes differed in numerous ways from the pronunciation of Biblical Hebrew that is described in modern textbooks and reference grammars.

In this book, my intention is to present the current state of knowledge of the Tiberian pronunciation tradition of Biblical Hebrew based on the extant medieval sources. It is hoped that this will help to break the mould of current grammatical descriptions of Biblical Hebrew and form a bridge between modern traditions of grammar and the school of the Masoretes of Tiberias.

The book is divided into two volumes. The first volume contains a description of the Tiberian pronunciation. The final chapter includes reconstructed phonetic transcriptions of sample passages from the Hebrew Bible, with links to oral performances of these by Alex Foreman. The second volume presents a critical edition and English translation of the sections on consonants and vowels in the Judaeo-Arabic Masoretic treatise Hidāyat al-Qāriʾ (‘Guide for the Reader’) by the Karaite grammarian ʾAbū al-Faraj Hārūn (eleventh century C.E.). Hidāyat al-Qāriʾ is one of the key medieval sources for our knowledge of the Tiberian pronunciation tradition and constant reference is made to it in the various chapters of this book. Since no complete edition and English translation of the sections on the consonants and vowels so far exists, it was decided to prepare such an edition and translation as a complement to the descriptive and analytical chapters of volume one.

You can find out more about the Cambridge Semitic Languages and Cultures Series and/or download and read these volumes for free at https://www.openbookpublishers.com/section/107/1. Click here to purchase the two volumes of The Tiberian Pronunciation Tradition of Biblical Hebrew at a discounted rate.

Let’s Detoxify Ourselves-Knowledge for the Future (On Politics of European Education)

Dear friends,

I’d like to share with you the English translation of our position paper Disintossichiamoci–Sapere per il Futuro, published a week ago on the largest academic discussion website in Italy, which quickly gathered more than 1,000 signatures from all areas of the country and from all disciplines. Some foreign colleagues have also given us their valuable support. This is very encouraging and allows us to aim high.

The biennial summit of European education ministers, the 2020 EHEA Ministerial Conference, will be held in Rome in June, a meeting organized within the framework of the Bologna Process. During those days – 23-25 June 2020 – we want to organize a counter-summit in Rome: a meeting that brings together different European opposition movements of professors and researchers to demand – together with students – a profound rethinking of knowledge policies at the international level. We are convinced that the supranational framework is decisive, as shown by the many affinities between the particular situations in which every one of us is involved. We want to work together, setting aside individual differences, with the aim of building a strong and clear alternative to the idea of knowledge that current policies are enforcing in Europe and beyond.

Help us to get in touch with others. It would be very important to identify representatives from the various organizations with whom to work operationally, as a network, to set up our June counter-summit.

Thank you for your help; we hope to hear from you soon,

Valeria Pinto  (sapereperilfuturo@gmail.com)

Let’s Detoxify Ourselves-Knowledge for the Future

“Economics are the method. The object is to change the soul.” Margaret Thatcher’s formula sums up well the process that has characterized the policies of knowledge, education and research (but not only these) in recent decades.

The economic method – shortage as a normal condition, at or below the survival limit – is visible to everyone. Also clearly visible, together with the financial strangulation, is the bureaucratic one. Less visible is the target. The change of our soul is so deep that we no longer even notice the destruction that has taken place around us and through us: the paradox of the end – inside the “knowledge society” – of a world dedicated to the things of knowledge. Our very hearing has become accustomed to a programmatic linguistic devastation, where an impoverished technical-managerial and bureaucratic jargon reiterates expressions with a precise operational value which, however, seems difficult to grasp: quality improvement, excellence, competence, transparency, research products, teaching provision… And autonomy, or – to evoke Thomas Piketty’s words – the imposture that initiated the process of destruction of the European university model. A destruction that has taken as a rhetorical pretext some faults – real and otherwise – of the old university, but of course without remedying them, because that was not its goal.

Thirty years after the introduction of “autonomy”, twenty after the Bologna Process, ten after the “Gelmini Act” (in Italy), the critical literature about this destruction is boundless. It is a fact, although making it explicit seems a taboo, that research and teaching are no longer free. Research, subjected to senseless pressure to “produce” more every year, is, more with each round (in Italy: the VQR, the ASN, etc.), in the grip of a real bubble of titles, which transforms an already fatal “publish or perish” into a “rubbish or perish”. At the same time, pressure is exerted to “deliver” an education entirely modeled on the demands of the productive world. The modernization that programmatically tore the university away from every “ivory tower” – making it a “responsive”, “service university” – meant nothing but a way, the “third way”, towards the world of private interests. Emptied of their value, education and research are evaluated, that is to say “valued”, through the market and quasi-market of evaluation, which, in its best institutional capacity, serves only to “favor (…) the effect of social control and the development of positive market logic” (CRUI 2001).

Due to the imposition of this market logic, the freedom of research and teaching – albeit protected by art. 33 of the Italian Constitution – is now reduced to freedom of enterprise, submitted to a regime of production of useful knowledge (useful above all to increase private profit), which controls the ways, times and places of this production. An authoritarian management expropriates researchers and scholars of their own faculty of judgment. Criteria deprived of internal justification – numbers and measures that, as everyone knows, have no scientific basis and do not guarantee in any respect the value and quality of knowledge – are smuggled in as objective. Pre-defining percentages of excellence and unacceptability, dividing with medians or prescribing thresholds, sorting into rankings, dividing journals into ratings: all of this, together with the most vexatious control practices in the form of certifications, accreditations, reports, reviews, etc., has only one function: forcing competition between individuals, groups and institutions within the only reality that today is given the right to establish values, that is, the market – in this case the global market of education and research, which is an entirely recent invention.

As a matter of fact, where markets traditionally did not exist (education and research, but also health, safety and so on), the imperative was to create them or simulate their existence. The logic of the competitive market has established itself as a real ethical command; opposing it has meant, for the few who have tried, having to defend themselves from accusations of inefficiency, irresponsibility, waste of public money, and defense of corporatist and caste privileges. Far from the triumph of laissez-faire, a police-like “evaluative state” has worked to ensure that this logic is internalized in normal study and research practices, operating a real de-professionalisation that has transformed scholars engaged in their research into compliant entrepreneurial researchers, obedient to the diktats of the corporate university. To gratify them, they are offered an economic and existential precariousness that goes under the name of excellence: the framework functional to a “competitive Darwinism” that is explicitly theorized and, thanks also to the moral coverage offered by the ideology of merit, forcibly made normal.

Many now believe that this model of knowledge management is toxic and unsustainable in the long term. The devices of performance measurement and reward evaluation convert scientific research (asking in order to know) into the search for competitive advantages (asking in order to obtain), thus jeopardizing the meaning and role of knowledge for society. More and more, today, we write and do research to reach a productivity threshold rather than to add knowledge to humanity: “never before in the history of humanity have so many written so much despite having so little to say to so few” (Alvesson et al., 2017). In this way, research is fatally condemned to irrelevance, dispelling the social appreciation it has enjoyed so far and generating a deep crisis of trust. The time has come for radical change if we want to avoid the implosion of the knowledge system as a whole. The bureaucratization of research and the managerialization of higher education risk becoming the Chernobyl of our model of social organization.

What is needed today is to reaffirm the principles that protect the right of all of society to free knowledge, teaching and research – to protect, that is, the very substance of which a democracy is made – and, for this reason, to protect those who dedicate themselves to knowledge. A standpoint is needed to bring together what resists as a critical force, as the ability to discriminate, to distinguish what cannot be held together: sharing and excellence, freedom of research and the new evaluation, good higher education and the rapid supply of low-cost workforce, free access to knowledge and market monopolies.

In this direction, some stages are already outlined. The first is an assessment of the actual existence and consistency of our field. A project cannot move forward unless a minimum mass of people willing to commit to it is reached. If there is adequate preliminary support – let’s say 100 people, in symbolic terms – we will organize a meeting to discuss alternative policies on evaluation, the times and forms of knowledge production, recruitment and organization. Looking ahead, we will carry out an initiative in June, at the same time as the next ministerial conference of the Bologna Process, which will be held in Rome this year, with the aim of demanding – in conjunction with other European movements of researchers and scholars – a radical rethinking of knowledge policies.

Valeria Pinto – Davide Borrelli – Maria Chiara Pievatolo – Federico Bertoni + 1000

If you want to add your name to the declaration, write to sapereperilfuturo@gmail.com, indicating your name and institution.

Pit Schultz: Europe is not a Data Grossraum

By Pit Schultz (@pitsch)

Apart from outdated spatial metaphors, the sectorization of data economies points to the risk of the emergence of monopoly platforms. There is a danger of “natural” sector-wise monopolies which, thanks to telecommunications-driven 5G infrastructure (EdgeML, IoT), allow vertical integration and the centralization of value chains, bypassing the principles of net neutrality.

Instead of coordinating research and development, much redundancy is created through competition. What is necessary is a consistent renewal of Open Data guidelines in the area of algorithms, data structures, training data and publications, one that learns from past mistakes. Suitable new licensing models must be developed to prevent direct data pipelines, e.g. from Wikidata to the Google Knowledge Graph, without any financial compensation. Anyway – where is the mention of Wikipedia as a European, cost-effective counter-model to Silicon Valley, with Diderot and d’Alembert in uncharted digital territory? The European Bertelsmann search engine and the digital library have all but disappeared into the Babylonian metadata jungle.

Data is the oil of the 21st century only insofar as it is better not to base an economy on it without being prepared for unpleasant side effects. The many cases of scraping, as well as the problem of patent trolls, show that today’s copyright law, with absurdities such as ancillary copyright, blocks all digital development. Only a radical open-source and open-standard strategy in the field of machine learning can give Europe a unique selling point. To understand data economics as a “win-win marketplace”, however, would be a mistake, because platform economics tends toward “the winner takes it all”. Amazon evaluates data internally to optimize brick-and-mortar logistics; Google makes its money not by selling data but by advertising; and so on. Where data sales do occur, as recently with Avast, they are usually ethically questionable. What you can sell are complex machine-learning-supported services, or rather entire environments in which processes (constraints) can be abstracted and logistically optimized (as with the German business software company SAP).

The recently published EU policy paper A European Strategy for Data reads like a clueless manifesto. Not even the distinction between AI (AGI) and machine learning is made, and generalities regarding image recognition and bias are served up in a public-friendly way. Instead of questioning the ethics of traditional concepts of ownership in the digital world, references are made to the protection of privacy. As you can imagine, private data sets are irrelevant for most training models (e.g. translation software such as DeepL, used here). Somewhere in the margins it is pointed out that reproducibility should be a criterion, which is equivalent to the disclosure of training data.

Rather than further promoting the spreadsheet mentality, bullshit bingo and McKinseyfication of European digital policy, it is time to identify the structural principles that distinguish digital networks from industrial and financial neo-liberal economic models, and that have made them successful: for example, the absence of classic economic parameterization within source-code production; cyclical, iterative “agile” management; the barter agreements of Internet providers; and the absence of a money economy within transnational platform monopolies.

The fact that Mark Zuckerberg’s internal planned economy is based on insufficient concepts and “Californian dreams” – speculation on VR and augmented reality as well as home appliances – does not change the decisive competitive advantage of using entirely privatized user data sets to extract added value in machine learning. This is precisely where regulation should step in. The proposal of interoperability – to move private user data to imaginary competing platforms, or to make a small subset of data structures transparent so that we can communicate across all platforms – does little to change the respective monopoly positions in deep learning, given the complexity and depth of the already accumulated data sets.

Instead of promoting a “platforming” of sectors by top-down sectorization into “data rooms”, an approach would make sense for Europe that deals with counter-models and successor models to these quasi-natural monopoly structures. Data-space sectoring according to the “Airbus” and “Transrapid” models should give way to an approach in which complete human-generated content models, such as Wikipedia, are positioned against American platform monopolies. Wikipedia’s market value has never really been measured, yet thanks to its Creative Commons licence it has long since been exploited, without any licence fees, by Google & Co.

Just as the portal model disappeared in the dotcom crisis, it is quite conceivable that the era of platforms, i.e. privatized public spaces on the Internet, will soon be a thing of the past if Europe focuses on its core competence of technological innovation through regulation. Administrative power, too, is a sleeping giant in networks, embodied in Germany by the Federal Network Agency, which regulates the large-scale infrastructure of transport, water, gas, electricity and information. It can be observed that spatial-physical property is, more or less, bridged by the network and made to disappear, depending on the type of network. Paul Virilio already pointed to this disappearance of space. It has recently been geopolitically reterritorialized by theorists who follow in the footsteps of right-wing conservative thinkers such as Carl Schmitt. This intellectual trend corresponds with the populist, separatist and reactionary local movements which, as an anti-globalization movement of the right, retroactively divide and re-nationalize international network structures, starting from the application level.

The question concerning “AI”

For the time being, the term is outdated because it refers to Strong AI (Minsky et al.), i.e. top-down, ontological and rule-based. Artificial General Intelligence would be the better term, distinguished from machine learning or deep learning.

Positioned against the then-dominant term of cybernetics, “AI” comes from the early days of computers, which also spoke of “electron brains” and the “general problem solver”. It is often overlooked that the origin of neural networks lies in analogue computing (the perceptron), and that they carry certain features of this technical branch of development. Following Friedrich Kittler, machine learning would not have been possible without a hardware (r)evolution, which consisted in massive parallelization through the availability of graphics cards, today equipped with thousands of mini-CPUs (on the von Neumann architecture) running in parallel. This parallelism allows the operational algorithmic complexity to map multidimensional non-Euclidean vector spaces that help to statistically reduce the parameters of a complex reality, step by step, which in turn has little to do with the space and time metaphors of classical media theory, or with the search for the mind or soul.

It is interesting to note that, again for simplicity’s sake, we are working with layers, i.e. two-dimensional matrices that are related to each other and perform billions of matrix operations, layer by layer, based on thresholds that depend on each other in relational ways. This threshold logic rests on fuzzy logic, in contrast to Boolean algebra. Each layer in deep learning takes over certain statistical tasks of complexity reduction; in training, gigantic big-data stocks are recursively averaged for their redundancies and differences. The “machine”, in a social, unconscious, linguistic sense, i.e. the redundancies of the production of difference, becomes extractable and repeatable—within limits.
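As a minimal sketch of this layered picture, under stated assumptions: the layer sizes and random, untrained weights below are arbitrary illustrations, and the sigmoid stands in for the graded, fuzzy-logic-style threshold the text contrasts with Boolean gates.

```python
# Minimal sketch of stacked layers: each layer is a weight matrix, and a
# soft threshold (sigmoid) gates the signal passed to the next layer.
# Weights are random and untrained; this only shows the data flow.
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One matrix multiplication followed by a sigmoid threshold."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

x = rng.normal(size=(1, 64))      # an input vector of 64 features
w1 = rng.normal(size=(64, 32))    # first layer: 64 -> 32
w2 = rng.normal(size=(32, 10))    # second layer: 32 -> 10
output = layer(layer(x, w1), w2)
print(output.shape)               # (1, 10)
```

In a real system those weight matrices are what training adjusts, layer by layer, against massive data sets; the architecture of the stack itself is the “recipe” the next paragraph describes as empirical and alchemical.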

The implications are obviously explosive in terms of political economy. In the absence of a universal theory, machine learning develops, empirically and iteratively, innumerable recipes concerning the combinatoric architecture of these layers; some speak of an alchemist approach, or of the black-box problem, because in trained machine-learning models algorithms, data and data structures fuse into an impenetrable amalgam. Many therefore try to introduce reversibility and control structures with some additional effort; the easiest way to “debug” ML in an ethical sense would be to provide the complete training data. This calls Big Data’s walled-garden system into question politically and ethically, because at the very least scientific auditing, institutional access, etc. must be provided.

The geopolitical arms race for machine learning is largely uncoordinated, so many are trying to reinvent the wheel at the same time with a lot of money. In some cases, competition has taken advantage of the network effects of the technology itself, which can be seen in the rise of Google (using open-source strategies) and the triumphant advance of Linux/open source across the cloud infrastructure. The upcoming revolution of machine learning concerning domain-specific singularity moments can only be achieved if models, training data, algorithms and documentation are published under an open science / open data policy. Then it is also possible to avoid the waste of resources of competition and to better coordinate research and development. The competitor who implements such an Open Data strategy will have a strategic advantage. There is still no Richard Stallman of machine learning.

All the speculations about consciousness, from homunculi and Roko’s basilisk to uploading the mind, are as amusing as d’Alembert’s dream, and probably part of a narrative that will soon fade away. In that sense, it would be good to stick to Michel Foucault’s anti-humanism and target the techniques of power itself, instead of indulging in a geological species-narcissism that presents itself as part of the Anthropocene discourse.

A sci-fi scenario I prefer is the following. When a crisis occurs in the financial sector, the ML-driven prediction algorithms will create a singularity moment that eventually develops a recursive local autonomy. A more or less irrational herd behaviour of the actors is then deliberately exploited and generated, comparable to malware, so as not only to pull the financial system into the abyss but at the same time to provide a price-control system which, from this point onwards, has far more complex game-theoretical options than all the world’s stock-market players, following mass-psychological redundancies, combined. It is then no longer assets but algorithmic complexity that puts the Invisible Hand in its place – an Invisible Hand which is only an alias for the 1%, who do not necessarily concentrate a large part of human intelligence in themselves.

When AI is warned against, in Davos or by Peter Thiel, one could imagine scenarios in which the economic a priori is replaced by a technical one, which is exactly what the technological determinism of the climate crisis points to. However, I see not a linear determinism but a stochastic one, derivable from the cyclical processes of ecology as well as from the iterative development of technology, in which the social and the technical are two aspects of the same process, separated only by our insufficient knowledge culture. To even conceive of a process of socialization, that is, the socialization of certain technologies (following the model of the Norwegian oil industry), the social sciences, for example, lack technological and economic knowledge. Conversely, it can be argued that precisely this lack of information serves certain interests.

Intelligence in this context is always already artificially and technologically constructed, through techniques of writing, recording, distribution, and governability. An intelligence test constructs the technological measurability of human performance on certain problems, based on cultural techniques such as written language, mathematics, and statistics, with a multitude of underlying technical processes that make intelligence numerically countable. The same applies to university degrees or citation frequencies, which meet the business sector’s demand for standardization and quantifiability. These performative tests measure cultural competence with the intent of reproducing certain abilities, and they hide what is called ‘social’ or ‘emotional’ intelligence. Rather than imagining an anthropomorphic intelligence of the technological, which has long since mechanized itself, it would be interesting to question the nature and quality of the institutionalized, administrative intelligence that is utilized today through formalized processes and procedures.

Black Excellence on the Airwaves: Nora Holt and the American Negro Artist Program

Co-authored by Chelsea Daniel and Samantha Ege

Nora Holt (c.1885 – 1974) was a leading voice in Black America’s classical music scene. Her activities as a composer, performer, critic, commentator, and more shaped the Harlem Renaissance and its Chicago counterpart. As the fervor of the Black Renaissance progressed into the Civil Rights era, the energy that drove Black women’s activism sought greater outlets, one of which was the male-dominated world of radio. In radio, Holt continued her mission to broadcast Black excellence and there, her voice found greater power. 

Photograph of Nora Holt by Carl Van Vechten. Retrieved from the Library of Congress website.

As two classical pianists of African descent, we, Chelsea M. Daniel and Samantha Ege, were accustomed to Black women’s voices (as embodied in their compositions, performances, and criticism) being minimized, or muted altogether, in the Western art music narrative. Hearing Holt for the first time was powerful.

Chelsea never knew that someone who looked like her existed in classical music, especially someone who had as great an impact as Holt. Starting her piano studies at five, Chelsea was consistently the only Black female pianist in both her high school and college programs, and she felt deeply isolated. It was nearly impossible for her to find any representation of Black female pianists, and she was encouraged to play only a “standard” repertoire dominated by white male composers. In her sophomore year of college, Chelsea took a music history course that taught her about diverse musicians who were omitted from her textbook. This discovery, and a meaningful partnership with friends who shared similar experiences, prompted the beginnings of numerous projects dedicated to showcasing music by diverse musicians, one being her junior degree recital, where she programmed the Sonata in E minor by the groundbreaking African-American composer Florence Price (1887 – 1953). With few performances of the piece existing online, Chelsea found Samantha’s recording and decided to reach out to ask for guidance with the music.

Samantha’s journey had been very similar to Chelsea’s, from looking to see some part of herself reflected in her studies to actively seeking a classical music history that celebrated the truth of its diversity. These similarities are what led them to Price, and eventually to this collaboration. At the time Chelsea reached out, Samantha was developing her research on Price’s network and its impact during the Chicago Black Renaissance. As Samantha began to piece Holt’s influence together, she couldn’t help but lament the radio silence around Holt’s life and legacy in the mainstream musical consciousness. The following tweet from the Red Bull Music Academy certainly rang true. Or so she thought.

Chelsea came across Holt’s literal voice during her internship at WQXR-Radio, to which Samantha’s reaction was: “Oh. My. God.” Chelsea had been trying to track down locations in New York where Price’s friend and collaborator, the composer-pianist Margaret Bonds (1913 – 1972), had performed. She was shocked to find a live recording of Bonds performing on the American Negro Artist Program, something that does not even exist on YouTube. For us to hear Bonds on the piano and Holt’s actual voice, with the crisp mid-Atlantic elocution of a bygone era but a message of Black excellence for the ages, was to feel inspired, renewed, significant, and empowered (much like Holt’s listeners during her time).

***

Born Lena Douglas in Kansas City to a minister father and a musically inclined mother, Holt began her music education playing the organ in church. Her musical pursuits aligned with the Talented Tenth thinking that W.E.B. Du Bois promoted around the turn of the century: the belief that the most highly educated ten percent of the African-American population would uplift the race, and that the study of classical music would provide a tool for mobility. However, Holt also lived beyond the limits of early twentieth-century respectability. As a young adult, she challenged the archetype of the modern-day Black woman. By the time she graduated from Kansas’s Western University, a prestigious HBCU, she had been married three times, yet she still finished at the top of her class.

In 1917, she married her fourth husband, George Holt, a wealthy hotel owner thirty years her senior, and changed her name to Nora Holt. Prior to meeting him, she had moved to Chicago and earned her living as a cabaret performer while also actively performing, composing, and promoting classical music. In 1918, Holt became the first African-American in the United States to attain a Master of Music degree, which she earned at the Chicago Musical College. For her thesis composition, she presented an orchestral piece called Rhapsody on Negro Themes. The rhapsody was one of over 200 compositions that Holt wrote. Unfortunately, many of them were lost and have yet to be recovered. Holt had kept her manuscripts in storage during her time away in Europe, but returned to find that all had been stolen. The only surviving works were those that had appeared in her publication, Music and Poetry: the art song “The Sandman” and Negro Dance (1921) for solo piano.

Negro Dance with Samantha Ege, piano

Holt’s advocacy for Black artistic excellence became even more far-reaching with her work as a music critic for the Chicago Defender and the New York Amsterdam News. She reviewed every concert featuring African-American performers and composers that she could find, and she made history as the Chicago Defender’s first-ever music critic, becoming one of the first women to write criticism for a major newspaper.

Holt moved into radio during the 1940s. Her American Negro Artist Program on WNYC began in 1945 and spanned almost a decade. It was upon this platform that she used her voice to further amplify the work of Black classical practitioners.

Chelsea M. Daniel, Butler School of Music, University of Texas at Austin. Image courtesy of the authors.

Chelsea found that the NYPR Archive Collections had published Holt’s 1953 American Negro Artist Program. This half-hour segment aired on February 12 at 5pm as part of WNYC’s 14th annual American Music Festival. Though the scope of the festival was far broader, Holt’s program specifically highlighted the classical artistry of African-descended practitioners. February 12 fell in the middle of Negro History Week, the forerunner of today’s Black History Month, which New York Governor Thomas E. Dewey had proclaimed from February 8 to 15 (a span selected by the Week’s founder, Dr. Carter G. Woodson, in the 1920s to encompass the birthdays of Abraham Lincoln and Frederick Douglass). With this program, Holt led her listeners through the multifarious layers of Black diasporic representation.

Samantha Ege, Preston Bradley Hall, Chicago Cultural Center. Images courtesy of the authors.

February 12 was also the commencement date of the festival, which was first announced in early February 1940. WNYC planned to broadcast an all-American series of concerts (forty in total) that would begin on February 12 and end on February 22, dates marked by the birthdays of Abraham Lincoln and George Washington, respectively. Morris S. Novik, WNYC’s director, told the New York Times (February 3, 1940) that the purpose of the festival was two-fold. He elaborated:

One purpose is to build the municipal radio station into an even greater force in the cultural life of the community, and the second is to promote the cause of good American music. American broadcasters have done a splendid job in developing appreciation of classical music. Radio must do still another important job by focusing attention on American music, and by demonstrating that Americans have written good–even great music.

The American Music Festival was the first of its kind to promote music that encompassed the nation’s musical past and present on such a scale, and with such stylistic variety. According to Novik, no other radio station had attempted to broadcast such a wide cross-section of American music with the same grand vision that he had. The New York Times reported on just how extensive this cross-section was (February 12, 1940):

 The concerts will cover nearly all types of American composition. Simple ballads which the pioneer sang as he plodded his way Westward will be included, along with the professional orchestral works of today. Spirituals and blues, indigenous to American soil, will vie with compositions that incorporate the latest innovations. All types of compositions: mountain songs, barber-shop ballads, vaudeville melodies, marches and the more serious forms of composition which make up the musical life of America will be represented.

The festival offers an affirmative answer to the question, “Do we have American music?”

Holt’s program not only evidenced a resounding “yes,” it presented a pan-diasporic purview that affirmed the socio-sonic pluralities of Black artistry. Samantha uses the term “socio-sonic pluralities” to ground the musical developments of Black cultural creators in their environment and to recognize how various social conditions can shape artistic expression. She identifies this as a central component in Holt’s 1953 American Negro Artist Program, particularly as the program went beyond the United States to embrace the Americas. With composers whose backgrounds encompassed Canada (R. Nathaniel Dett) and St. Kitts (Edward Margetson) and musical influences that merged different diasporic folk traditions with Romantic, neo-classicist, modernist, and Black Renaissance aesthetics, the American Negro Artist Program celebrated the interconnected, yet also distinct audiovisual histories of the African diaspora.

Program:

“The Breath of a Rose”
William Grant Still, composer
Viola John, contralto and Margaret Bonds, piano

“I want Jesus to Walk With Me”
Negro Spiritual arranged by Edward Boatner
Viola John, contralto and Margaret Bonds, piano

“His Song” and “Juba Dance” from In the Bottoms
R. Nathaniel Dett, composer
Una Hadley, piano

“One” and “Genius Child,” based on poems by Langston Hughes
Edward Lee Tyler, composer
Edward Lee Tyler, bass-baritone and Norma Holmes, piano

“First Movement” from Fantasy on Caribbean Rhythms
Edward Margetson, composer
The American String Quartet: David Johnson, 1st violin; Frank Sanford, 2nd violin; Felix Baer, viola; and Marion Combo, cello

“By the Sea”
Julia Perry, composer
Adele Addison, soprano and Margaret Bonds, piano

“The Negro Speaks of Rivers,” based on a poem by Langston Hughes
Margaret Bonds, composer
Adele Addison, soprano and Margaret Bonds, piano

On a scholarly level, Holt’s American Negro Artist Program adds another dimension to the way Samantha interprets the socio-sonic pluralities of Black artistry in the post-war era. Accessing Holt’s voice in the context of radio reifies connections between growing technologies and Black classical propagation at this time. In the absence of Holt’s full composition catalogue, hearing Holt amplify the work of her esteemed peers gives an enhanced perspective on her musical developments—from composer to curator, off the score and onto the airwaves.

On a personal level, however, it is upsetting to not have learned about Holt sooner and, as Chelsea elaborates, to not have a face like Holt’s to look up to during the loneliest moments of our education. Holt’s work validates Chelsea’s own pursuits, particularly in radio. Holt successfully created her own space in classical music, and did so unapologetically. She provided opportunities for Black musicians to be at the forefront and challenged a system that was not built for first-person Black narratives. And so, we take a leaf from her book, recognizing that the (re)sounding of her story is also the celebration of our own.

Listen to Holt and the American Negro Artist Program here.

Featured image: “Music stand (1)” by Flickr user Rachel Johnson, CC BY-ND 2.0

Chelsea M. Daniel is a senior at the University of Texas, Austin, pursuing her Bachelor’s in Piano Performance. She is devoted to showcasing the stories and music of marginalized people and musicians. Daniel is the co-founder of the award-winning Exposure TV, which was created to highlight composers and musicians from underrepresented backgrounds. Daniel came across the American Negro Artist Program during her internship at WQXR-FM.

Samantha Ege is a scholar, pianist and educator. Her PhD (University of York) centres on the African-American composer Florence Price. Ege’s upcoming article on Price, Holt and the Chicago Black Renaissance women is called “Composing a Symphonist: Florence Price and the Hand of Black Women’s Fellowship” and appears in Volume 24 of Women and Music: A Journal of Gender and Culture. As a concert pianist and recording artist, Ege continues to amplify Black women composers in her repertoire.

REWIND! . . .If you liked this post, you may also dig:

My Music and My Message is Powerful: It Shouldn’t Be Florence Price or “Nothing”–Samantha Ege

Spaces of Sounds: The Peoples of the African Diaspora and Protest in the United States–Vanessa Valdes

Deejaying her Listening: Learning through Life Stories of Human Rights Violations–Emmanuelle Sonntag and Bronwen Low