Oxford University Innovation’s Mark Mann ponders how intellectual property rights management should be overhauled.
When I write an article it is usually relentless optimism about the opportunities innovation can bring. I think the rest of my working life will be dominated by the possibilities that come from our increasing ability to make sense of large volumes of data in all walks of life. But a few conversations over the last few weeks, across a number of projects, have motivated me to sound the alarm about intellectual property (IP) rights management.
I write this primarily from the perspective of a university. When researchers obtain datasets to find and develop new information, the rights they have usually pertain to research only: researchers are allowed to use the data for free, publish results from it and teach people about the results. However, they are not usually allowed to take payment for using the results – they do not have commercial rights. Furthermore, the greatest insights are found by bringing multiple datasets together, each of which is likely to have subtle differences in what you can and cannot do with the data.
I think the purpose of a university is to develop new understanding and to communicate that new understanding as widely and broadly as possible. No one usually disagrees with this. However, in sharing that understanding, universities do not want to get sued. The bigger universities therefore usually have a dedicated IP rights management team which checks, for instance, where the funding for a particular project came from, whether anything that comes out of that project can be used commercially and whether any third party has rights either to block, review or claim a share of any revenues. The Wellcome Trust, for instance, asks for a share in profits from projects it funds so it can plough the money back into new projects, much like universities do.
The system designed to facilitate this was primarily built around patents, because at the time that was where most revenue was expected to come from. If a new invention is disclosed as a result of a project, the inventors and the university normally have a system for working out how to share any profits, which varies from university to university. This, for me, is the key rate-determining step in getting knowledge out of universities. The checking can be slow: there is often huge demand on small teams who responsibly check that the university will not get sued. Imagine how much Oxford would have been liable for had it not performed these checks for the AstraZeneca vaccine.

The system creaks and groans further when you get a software project. Software is often developed by teams of researchers and developers iteratively (or agilely) over many months, with changes captured in version control tools such as Git (often hosted on GitHub) or Subversion. Characterising who contributed what and when is painstaking, and not every line of code is "worth" the same. Third-party libraries are used freely, each usually under a different version of a different licence. It is a nightmare to wade through and, frankly, the system just does not work.
So with universities taking ages to get software rights sorted (in a world where you should be refreshing your codebase every three years), are they ready for data? The answer is absolutely not. Rather than burying you all in a sea of despondency, I am going to suggest a few solutions.
The disclosure process should be vastly simplified with much of the process automated, ideally converted to an app. Key features would be:
- Letting researchers readily select which grant they are working on. A university knows who is paying what to whom (or at least it should). Make a clickable interface that alerts researchers to what they can and cannot do (ideally before they have started work).
- Automating the process of working out the IP rights of software licences and datasets. The semantics are not difficult here: legalese, though largely impenetrable to untrained humans, is well suited to machine-learning algorithms. Researchers would simply have to input the URL for the licence (which, in my experience, is almost always online).
Checkers are then alerted to the key paragraphs they have to review which should hopefully significantly speed up the process.
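Even before anything machine-learned, a simple keyword triage could surface the rights-relevant paragraphs for a checker. A minimal sketch in Python, assuming an illustrative (not exhaustive) keyword list and plain-text licence input:

```python
import re

# Terms that typically signal rights-relevant clauses.
# This list is illustrative only; a real tool would need a curated vocabulary.
RIGHTS_KEYWORDS = [
    "commercial", "non-commercial", "redistribute", "derivative",
    "royalty", "sublicense", "attribution", "warranty",
]

def flag_key_paragraphs(licence_text: str) -> list[str]:
    """Return the paragraphs mentioning rights-relevant terms, so a human
    checker can jump straight to them rather than reading the whole licence."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", licence_text) if p.strip()]
    pattern = re.compile("|".join(RIGHTS_KEYWORDS), re.IGNORECASE)
    return [p for p in paragraphs if pattern.search(p)]

# Hypothetical licence text for demonstration.
sample = """This software is provided free of charge for academic research.

Commercial use requires a separate agreement with the licensor.

The authors accept no liability for damages arising from use."""

for para in flag_key_paragraphs(sample):
    print(para)
```

In this toy example only the commercial-use paragraph is flagged for review; the point is simply that routing attention to a handful of paragraphs is far cheaper than reading every licence in full.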
However, more generally, the above only exposes the rights each party has before a negotiation. The more fundamental problem is that a negotiation is required at all, because negotiations can take months or even years. If you have a healthcare intervention that works, people will die whilst parties argue about who gets the money. In my view, this is unacceptable.
I think we need to look to the music industry for models of how to manage who gets the royalties and when. Nobody using Spotify worries about whether Björn from ABBA will get a couple of pennies when "Dancing Queen" comes up on their car playlist. As with record labels, blanket agreements are required with regular funders of university research. Yes, there will be some winners and losers, but over time a pattern should emerge and a fair payment be determined. The same goes for datasets: set standard commercial terms and spread the bet by making each dataset available to all on the same terms.
I can see a lot of lawyers and accountants puffing their cheeks reading this, but the opportunity cost of delays caused by negotiation in the digital age is too great. The universities have a big role to play in all of this. Get ready.
– This article first appeared on LinkedIn. It has been edited for style and republished with permission from the author.