I'm hiring a product manager

So, I’m hiring a product manager. We are developing new greenfield products. We have a small, really fun team, and we are using a lot of lean tools to try to get to product-market fit with relatively low risk.

To apply, submit a CV using our [hiring portal](https://uk.sagepub.com/en-gb/eur/sage-vacancies).

Here are some more details about the role:

## What we are doing

SAGE is innovating around how we might support social science researchers engaging with big data and new technology. We are creating new services and products that can provide value to the research community while at the same time offering new business opportunities for SAGE. This is an opportunity to work on stuff that matters in a fast-paced team focused on using lean principles to rapidly understand the needs of researchers, and to use those insights to develop products and services that will best serve their needs.

Our first product will roll out over the summer, and we are now looking to expand the team to allow us to move faster with the testing and creation of further products. You will play a pivotal role in this effort.

## WHAT WILL YOU BE DOING?

How do you move fast in a large organisation? How do you make the right bets to make sure that what you are building is going to be useful to people? We have been experimenting with a set of tools from lean product development to rapidly iterate on individual products while at the same time balancing all of the ideas that crop up across the entire space of opportunities.

We have proven that we can get things done quickly at low risk and we now want to scale this approach out to allow us to look at a wider set of product opportunities.

You will be responsible for creating experiments to test our thinking around an immediate set of three possible products, and beyond that for helping us prioritise and test a wide range of other product ideas. Each week you will construct tests that allow us to decide whether to continue working on a product idea, as well as helping us refine the ideas that we believe have real potential. You will be working with internal teams as well as partners from some of the world’s most prestigious universities.

We have had success using tools such as the lean product canvas and the lean value tree, and supporting our analysis of risks and opportunities by applying pirate metrics to different business models. You will be expected to pick up these tools and become fluent in leading their use. We are open to any ideas that bring us rapid insight, and if you find better ways to validate or reject our hypotheses we will be interested in trying out any approach that can help us get to scale.

The ultimate goal of the role is to help us get to a point where we can bring new products to market.

## RESPONSIBILITIES

* Lead on driving experiments around ideas in current development
* Help the team make go / no go decisions on these ideas
* Coordinate creation of testable prototypes
* Run user tests
* Be responsible for reporting on outcome of tests, both qualitative and quantitative
* Help the team create a strategy for how to build MVPs for successful product ideas
* Help the team prioritise further opportunities
* Develop business cases for product ideas
* Coordinate design sprints where appropriate
* Coordinate with other SAGE teams such as marketing, IT and Design

## Skills

Successful Product Launch Experience – You will have worked on launching products to market, from early ideation through to onboarding customers. You will understand how to translate the needs of users into product features, and how to prioritise those features. You will have worked across teams on successful product launches and made significant contributions to their success.

Lean Product Development Experience – At the heart of this role is lean experimentation. You must have experience of executing in the spirit of lean or agile methodologies. We have adopted a specific set of tools, but underpinning all of them is a mindset of experimentation and of getting to data-driven decisions. We are looking for someone who exemplifies this spirit and who can bring creativity to bear when faced with uncertainty.

Interpersonal Communication Skills – you will be communicating daily with people both inside and outside of SAGE. Being able to hold courteous, persuasive conversations (by email or phone) and respond quickly to queries are key requirements of the role. You should be able and willing to share your thoughts and experience with the team and feel confident to speak your mind and ask questions.

Prioritizing Workloads – you will be managing a varied workload involving many different tasks. You will need to prioritize effectively and allocate appropriate amounts of time to each task.

Research and Analysis Skills – you will be required to conduct desk research as well as expert and user interviews. Knowledge of social science research methods (both qualitative and quantitative) would be an advantage.

Curiosity and enthusiasm – you will be interested in working with social scientists, excited about the opportunities that digital publishing offers, enthused about research methods and eager to learn about SAGE and the markets we serve.

ic2s2 Thursday morning keynotes

Daniel Romero - Examining the Effects of Exogenous Shocks on Social Networks and Collaborative Crowds

We are looking at the dynamics of social networks. The dynamics of networks are becoming increasingly well understood, but the question here is what happens to a network when there is a big external shock to the system?

One example they looked at came from a hedge fund: the data included the full IM communication among about 182 people, about 22M items. They also have the trading history of the hedge fund.

In this system an external shock will be a big change in the market. How do those changes change the structure of the network?

What they found was that signals coming from the network turn out to be more predictive of future trading behaviour than just looking at the shocks that are happening in the market.

He looks at the subgraph of the hedge fund related to people who talk about a specific stock.

When there is a big change in the price, more people talk about it and the networks get bigger.

Those networks also become more clustered and ties become stronger. Those networks also become more inward looking.

They also looked at shocks in wikipedia, and censorship from China on Chinese Wikipedia. Large censorship activity started happening with editors getting blocked from the site. This really started to pick up in 2005. They looked at the fraction of edits that were contributed by blocked editors to get a sense of the impact of the shock on the system. They looked at a number of measures on the network, pre and post shock event.

Many articles have no blocked contributors, so they can use those articles almost as a null hypothesis set.

The higher the level of the shock, the lower the activity that happens on those articles.

The result on centralisation is interesting. You expect a larger change with more shock, but at high shock levels the change in centralisation actually starts to decrease again. They don’t know for sure why this might be the case, but they think it might be that those articles that attract a large number of contributors have a high inflow of new editors, which compensates for editors being blocked from the site (this intuitively makes a lot of sense, but it would be nice to think of a way to get to that result via another route).

Milena Tsvetkova - Social Science Research with Games and Gamification

There are 1.8B gamers online. 41% are female. The average age is 35 years old.

You can do gamified experiments.

If you create a gamified experiment you can bring the marginal cost of participation per person close to zero.

You can do new kinds of experiments at much larger scale than before.

Gamification has been used in the natural sciences for tagging images, for protein folding, for quantum computing and more.

In the 1980s Axelrod asked people to contribute strategies for the prisoner’s dilemma.

David Lazer’s platform Volunteer Science has some gamification elements, but this platform does not allow for person-to-person interactions.

This researcher is looking at segregation and also at inequality.

Schelling’s work shows that even if people are tolerant, you can end up with segregation. Even when people actively seek diversity you can end up with segregation, so the conclusion is that public policies might be futile.

Tsvetkova’s research created four real-time multi-player games. They took these games to 20 high schools in Sweden where they demonstrated the game. The goal was for players to get as many points as possible.

The gameplay takes about two minutes.

The key thing here is that they did not pay participants; they got participation because they provided a game environment.

Their findings on this data are interesting and don’t fully support the work of Schelling, but perhaps more interesting in this talk is that they got this data with participants trying to improve their “utility” without being paid, and the framework of this kind of system for getting insight is actually promising.

On the other hand, gamification can introduce unintended consequences.

A new project is starting on the emergence of inequality. How does strong inequality emerge when we know that skills are distributed fairly, and that people like to drive towards equality?

They want to try to study feedback loops, and what happens when certain characteristics are visible to others (such as wealth or reputation).

Their game design is going to be modelled on Tamagotchi and FarmVille. In this case, with an online game, unlike with high school students in the classroom, you now have to think about how to recruit and retain participants. (In fact this now starts to look like a classic challenge for any internet startup.) They are collaborating with ScienceAtHome to build out this new experiment.

Ic2s2 Thursday Closing Keynotes

Economic AI - Matt Taddy

Economic AI breaks complex systemic questions into simple sets of prediction tasks that can be attacked with off-the-shelf ML techniques.

Today economists are being asked to answer basic economic questions but at a ridiculous scale (within companies like Microsoft or Amazon). There are just not enough economists out there to do all the work being asked of them.

Applied economics can be done via experimentation (like A/B tests on pricing), but this can be expensive and lengthy, and the results still need a high level of sophistication to interpret.

In applied econometrics the kinds of natural experiments that we might look for only occur in special settings.

The claim here is that machine learning can automate and accelerate tasks in each of the described workflows.

The state of econometrics won’t be improved, but the efficiency of the work will.

In the beer sales example the data is messy, and using all the data provides a sample that is too undifferentiated, so an ML technique is used to cluster beers based on their text descriptions. If you use naive machine learning you run into problems: selecting the controls, and also predicting the response based on those controls, is hard. The answer is to use Orthogonal Machine Learning.

The two things you want to know, prices and sales, are both dependent on the other variables. You can run a prediction for each of them, and then take account of the regression between them. The two prediction problems are easy ML tasks.

The key thing here is that a causal question is being broken down into easier machine learning tasks.
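
As an aside, here is a minimal sketch of that residual-on-residual idea (often called double or orthogonal ML) using scikit-learn; the variable names and estimator choices are illustrative, not from the talk.

```python
# Sketch of orthogonal / double ML: two "easy" prediction tasks, then a
# regression between the residuals. Illustrative only, not Taddy's code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def price_elasticity(X, log_price, log_sales):
    # Step 1: predict price and sales from the controls, out-of-fold so the
    # nuisance models don't overfit the data they are later residualised on.
    price_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, log_price, cv=5)
    sales_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, log_sales, cv=5)

    # Step 2: the causal quantity comes from regressing residual on residual.
    price_resid = (log_price - price_hat).reshape(-1, 1)
    sales_resid = log_sales - sales_hat
    return LinearRegression().fit(price_resid, sales_resid).coef_[0]
```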

Another area they are looking at is instrumental variables.

He now talks about using deep nets. These can be really useful when we have images of products.

Deep learning is good low-development-cost, off-the-shelf ML. The next big gains in AI are coming from domain context. What we want to do is break questions into the kinds of shapes that will fit well into the affordances of these cheap tools.

In the Q&A there is a great question from Matt Jackson asking what the real benefit to the economist is of what the ML system is doing. Very simply, previously an economist would have spent a lot of time building a model, including or excluding features. Machine learning just ploughs through the data and fits models that work, taking away the need for that manual labour.

Learning to be nice, social norms as a result of adjusting expectation - Maria Pereda

Maria describes the Dictator game, where one player can donate to another player; the receiving player has no agency. When asked about their expectations in these games, people expect reasonable levels of generosity.

Expectations are a key factor in how people play. They want to understand the role of expectations in outcomes in the dictator game.

There is a nice way to model the dynamics of how experiences and stimuli can modify actors in this game in terms of their expected behaviour. They developed a constraint to connect global aspirations with the stimuli that the players experience in the game.
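
For flavour, here is a generic aspiration-based (Bush-Mosteller style) update of the kind this family of models tends to use; it is a sketch only, not the paper's exact dynamics.

```python
# A generic aspiration-based (Bush-Mosteller style) update: the probability of
# donating rises when the experienced payoff beats the player's aspiration and
# falls when it disappoints. Illustrative only, not the paper's exact model.
def update_donation_prob(p, payoff, aspiration, max_gap, learning_rate=0.2):
    stimulus = (payoff - aspiration) / max_gap  # normalised to [-1, 1]
    if stimulus >= 0:
        return p + (1 - p) * learning_rate * stimulus
    return p + p * learning_rate * stimulus
```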

So in the network of players we might expect behaviour to head towards some equilibrium state. They randomly play this game with about 1000 players.

They have compared the results of their simulations with the “expected” result from the literature. They also included imperfect decision making in the agents.

They find that for low numbers of items to donate the simulation matches the model, but at higher numbers of items to donate the simulations match actual experiments with people better than the analytic model suggests.

They can also introduce “inequity-averse” individuals (greedy people, who never donate more than half of the pie). Introducing these people makes the entire population a little bit more selfish.

You can also introduce “free-riders”. Just one free rider can destroy the social norm, and it makes everyone in the population really selfish (damn those nasty free-riders).

They are engaged in an online pilot, IBSEN, conducting large-scale experiments with about 23k volunteers (if you are in an associated lab you can sign up here).

Matt Jackson - Folk Theories of Cyber-Social Systems: Understanding People’s Understanding

The work he did with Facebook looked at how information spreads through Facebook. The result of the study was highly controversial and Matt received about 1000 pieces of hate mail. They had violated user expectations of what their newsfeed was. He did a text analysis on the responses that he received.

The reasons given by users fell into different classes of response:

  • queries about the newsfeed being manipulated
  • the newsfeed is important
  • emotions are important
  • big data is personal

The biggest theme that came through was about the sense of manipulation.

They also looked at the reaction in the media for up to six months after the event. Most of the media’s responses were emotional.

Even other academics who cited the study tended to write in a personal way.

Clearly they had violated people’s expectations, and the issue was around those expectations and the personal nature of the work.

It got them thinking about what people think about these systems, and this connected to his interest in folk theories. These are intuitive, informal theories that people develop to explain the systems they use, and which guide their behaviour.

He gives a great example about how we create a “folk theory” about gravity. Few people can explain gravity, and yet we understand how to operate in a gravitational field. We don’t need to go into depth about these systems, it’s not about being dumb, we just need to know enough about them to work with them.

Large socio-technical systems are complex, and yet people work with them and navigate their use, but we don’t really know what people think about these systems (like FB, online dating, Uber, etc.).

Some people have done this through interviews (previous work has found that people don’t really think about the existence of algorithms in these systems).

The work that Matt and his group took on started to look at the use of Metaphors.

They have developed a “discover-identify-specify” (DIS) method.

You need to discover the metaphors people use, then you reduce those into factors, and then look at the qualities of those factors.

They used a wiki-survey to find a way to enable users to generate metaphors in the system.

They collected a bunch of data, and got Facebook down to four factors. They also see four factors for Twitter.

You can find out how people feel about these factors. They found very similar results for the Facebook feed and for Twitter.

Matt is arguing that there are four folk theories about social feeds:

  • rational assistant that works for me
  • a transparent platform that is not manipulated
  • unwanted observer - is prioritising the company
  • corporate black box - not sure how it works, it’s all on FB

They looked at how strongly people hold different folk theories about the system, and you can compare those scores across platforms (Twitter vs Facebook).

(What’s interesting to me about the data presented is that the scores are close to each other for different platforms, even if one platform is slightly ahead of another on a given measure.)

Now they want to know:

  • how does your folk theory affect your behaviour?
  • what will update your folk theory?
  • how are these theories stable across types of algorithms and over time?

ic2s2 Wednesday morning keynotes

Cecilia Mascolo - human behaviour studies through the lens of mobile sensing and complex networks

She has been involved for many years in making and deploying sensors. The talk today will look at work they have been doing on foursquare data.

In London they have data on about 0.5M check-ins over nine months from about 40k users.

They also have social network data for these people, so they can connect their social network with the geo-spatial behaviour of these people. They have a two-layer network.

She talks about brokerage and structural holes. This is from Ronald Burt (1992), and they thought that this measure might provide interesting insight on their data.

In theory there is a link between brokerage and prosperity; can they study this effect at scale?

A place’s social brokerage is its ability to connect otherwise socially disconnected individuals. So is any given place connecting people who are not connected through their social links?

They can analyse the general signal of social brokerage based on the category of the place. You can also determine whether the place has more of a bonding role vs a bridging role.

They also then looked at deprivation studies (Hackney - where I live - sits in the pretty deprived part of the graph).

In 2011 Hackney was the most deprived area in London, but it also had a very high brokerage rank (how do we interpret this?). High brokerage indicates a high diversity in the area (which I can attest to about Hackney).

You can use this kind of tool for interesting types of urban discovery as well as looking at urban growth over time.

They can identify areas that have more new places than expected, in contrast to the growth of the rest of the city. You can see the effect of the Olympics in London in the data.

You can also look at the effect of a new place opening on other places. The most competitive places are grocery stores.

The most cooperative places include Turkish restaurants! (Along with gardens, monuments, plazas and tea rooms.) The Turkish restaurant was a surprising result; it relates to the Turkish community having a high Jensen coefficient.

You can also look at the temporal profile of places.

This was a great talk that I felt was just scratching the surface of what can be determined from data like this.


Emily Falk - How ideas spread from Brain to Brain

The key idea around this talk is that many ideas spread through social media, but not all ideas get shared equally. What makes someone want to share an idea?

Some examples: it might be because it makes us look good, because we think others might benefit from the information, or because we think others might be interested.

If you ask someone after the fact why they shared something, their self-reporting is unreliable.

They use fMRI to do brain scans while people are sharing content. They use oxygen flow as a proxy for brain activity.

In prior work they were able to predict future behaviour based on brain activity in selective areas of the brain when people are exposed to anti-smoking messages.

They observe that systems related to self-related thought and social cognition are activated during the process of evaluating whether to share an idea or not.

They ran an experiment looking at whether a chain of people would share a decision to promote a TV show (they created TV show ideas for the purpose of the experiment).

They also looked at participant behaviour around the decision to share NYT articles. They find evidence supporting the claim that the brain systems suggested earlier are really activated during the sharing process.

Looking at 40 participants, they could identify a measure of how much brain activation happened across those participants, and that is correlated to wide-spread sharing of those articles by the broader NYT community.

Not everyone’s brains are equally predictive. There is a fair amount of heterogeneity. Some people have a strong correlation with global sharing, what’s going on with these people’s brains?

It turns out that the brains of regular NYT readers were activated by a lot of the NYT articles, and as a result were not so predictive of general sharing.

This was an amazing, and somewhat scary, talk.

IC2s2 Wikipedia track, Wednesday

Studying Content Survival, Authorship & Controversy – By Tracing Every Word Change on Wikipedia - Fabian Flöck, Kenan Erdogan and Maribel Acosta

Arriving late to this talk.

[TokTrack: A Complete Token Provenance and Change Tracking Dataset for the English Wikipedia](https://zenodo.org/record/345571#.WWXuT8aQ3dQ) - the data is available on Zenodo, 13.5B tokens. There is a wikiwho API. This is an accurate dataset of all provenanced changes in English wikipedia.

They can look at survival rates for edits. If an edit survives for 48 hours, then it’s probably safe and will live on in wikipedia.

They can also look at how much people agree or disagree with the content that they are editing.

You can do an n-gram like analysis on text.

You can look at the most conflicted parts of a wikipedia page.

Basically, this is an amazing dataset.

Even Good Bots Fight - Milena Tsvetkova, Ruth Garcia Gavilanes, Luciano Floridi and Taha Yasseri

There are a lot of bots. In wikipedia there are many bots that assist in the process of building and maintaining wikipedia.

They created a typology of internet bots. Wikipedia bots are built out of Python code, on top of the wikipedia API.

There are a small number of bots, but they generate a large number of edits (often they are doing tiny simple things, but still in volume they make up a large number of edits).

An example of what one of the bots does is revert previous edits (they are trying to eliminate vandalism). They found that even bot reverts get reverted, but if the bot is a “good” bot, why are its actions getting reverted? Bot logic is basically creating contentions, and bots are reverting each other in an endless cycle. People are basically not paying attention, so this activity just continues at a low level, which leads to lots of interesting questions around the goals and efficiency of the system.

In thinking about the ecosystem of bots, we should expect bot—bot interactions to become complex.

Why We Read Wikipedia - Florian Lemmerich, Philipp Singer, Robert West, Leila Zia, Ellery Wulczyn, Markus Strohmaier and Jure Leskovec

They wanted to know why people read wikipedia, and they took a taxonomic and survey approach to find out. The survey ran on wikipedia.

They generated a three class taxonomy around motivation, information need and prior knowledge of the topic pages.

They were able to generate 30k responses to their survey. This is subject to non-response bias.

They did some bias correction using inverse propensity score weighting.
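
Roughly, that correction weights each respondent by the inverse of their estimated probability of responding. A minimal sketch, with feature and variable names that are mine rather than the paper's:

```python
# Minimal sketch of inverse propensity score weighting for survey non-response.
# `features` are log-derived covariates known for respondents and
# non-respondents alike; `responded` marks who answered the survey.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_weights(features, responded):
    model = LogisticRegression(max_iter=1000).fit(features, responded)
    p_respond = model.predict_proba(features)[:, 1]
    return 1.0 / p_respond  # per-row weight; only respondents' weights get used

# e.g. a bias-corrected share of "reading for work/study" among respondents:
# np.average(is_work_study, weights=ipw_weights(features, responded)[responded == 1])
```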

They also looked at wikipedia log data, and extracted features about the articles that were being read.

We see some unsurprising results e.g. during work hours people are using wikipedia for work or study related reasons.

They correlated information from the survey with traces they observed across logs and article features.

Topics of articles cluster around the kinds of motivations that people have reported. When bored, people look at sports articles; when studying, they look at technical pages, e.g. physics or mathematics.

They are now extending the survey to languages other than English.

They hope that they might be able to help editors better understand the kinds of motivations that readers have for the articles that they are writing. This could inform writing style or interface design.

Modeling collective attention on promoted content in Wikipedia - Marijn ten Thij, David Laniado, Andreas Kaltenbrunner and Yana Volkovich

Looking at “Today’s featured article” in wikipedia (today’s is about the Battle of Prokhorovka!).

These articles get more page views on the day that they are featured!

In redistributed time, page views show an exponential decay, and you can model them with a Poisson distribution. The first hour of page views will give you a good estimator for future page views.
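
A toy illustration of that kind of model, using simulated hourly counts: Poisson views around an exponentially decaying rate, fitted and then integrated to estimate the day's total. This is not the authors' code or data.

```python
# Toy version of the page-view model: Poisson counts around an exponentially
# decaying rate. Simulated data, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

hours = np.arange(24)
views = np.random.poisson(5000 * np.exp(-0.3 * hours))  # fake featured-article day

def decay(t, v0, k):
    return v0 * np.exp(-k * t)

(v0, k), _ = curve_fit(decay, hours, views, p0=(views[0] + 1, 0.1))
day_total = v0 * (1 - np.exp(-k * 24)) / k  # integral of the fitted rate
print(f"estimated views for the featured day: {day_total:.0f}")
```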

Linguistic neighbourhoods: Explaining cultural borders on Wikipedia through multilingual co-editing activity - Anna Samoilenko, Fariba Karimi, Daniel Edler, Jérôme Kunegis and Markus Strohmaier

This talk shows how language concepts are used in argument, and how the theory of cognitive balance and dissonance is used to analyse political speech (apparently for some positions bicycles are in the bad category of things).

The empirical evidence for this balance theory has been mixed.

They think they can test some of this theory using wikipedia data.

Each article can be considered as a project with a network of people who contribute to, or produce the article. These contributions can be negative or positive.

What they have found is that the articles on climate change and on racism are different.

They looked at the bi-polarity of all articles that were produced with sufficient user interactions.

They found that polarised teams produce low-quality output, but they need to do more work to uncover the core reasons for the existence of that polarity.

Linguistic neighbourhoods: Explaining cultural borders on Wikipedia through multilingual co-editing activity- Anna Samoilenko, Fariba Karimi, Daniel Edler, Jérôme Kunegis and Markus Strohmaier

I like this talk. Looking at concepts, as a concept gets narrower, the number of languages that cover it in wikipedia narrows down: 170 languages describe beer, but only 4 languages describe Kölsch beer!

At the moment this kind of observation is anecdotal, but they have generated a systematic way to investigate these kinds of relationships.

They are looking at the 110 largest language editions of wikipedia, with about 3M concepts (the articles).

They have a list of languages per concept. They create a bipartite network of concept co-editing and then create a network of significant links.

This creates the network of shared interest for concepts across languages.
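
A minimal sketch of that construction with networkx, using made-up toy edges rather than the paper's data:

```python
# Toy version of the co-editing construction: a bipartite language-concept
# graph, projected onto languages with weights counting co-edited concepts.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
languages = ["de", "nl", "sv"]
concepts = ["Beer", "Kölsch", "ABBA"]
B.add_nodes_from(languages, bipartite=0)
B.add_nodes_from(concepts, bipartite=1)
B.add_edges_from([("de", "Beer"), ("de", "Kölsch"), ("nl", "Beer"),
                  ("sv", "Beer"), ("sv", "ABBA")])

# Project onto the language layer: edge weight = number of co-edited concepts.
lang_net = bipartite.weighted_projected_graph(B, languages)
print(list(lang_net.edges(data=True)))
# A real analysis would keep only links that are significant against a null
# model before clustering the projection into language families.
```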

You can find which languages edit similar concepts. After clustering you find language families. Some of the clusters are clearly geographic.

There are also clusters forming around the lingua franca of a multilingual region, e.g. Hindi and Sanskrit, Sundanese and Malay.

They can quantify the contribution of different hypotheses to the overall effect of language clustering.

It would be super interesting to look at whether there are temporal shifts in editing across languages.

The Evolution and Consequences of Peer Producing Wikipedia’s Rules - Brian Keegan

What are the governance rules on wikipedia, how does the way they are constituted contribute to its resiliency, and how have they changed over time?

The rules are just pages, and can be edited at any time.

(There is a “no angry mastodons” rule!?)

Rule making activity mirrors historical wikipedia activity.

Early rules are still very active sites of editing.

New rule editors between 2006 and 2008 made the most edits, but make few edits now.

People who participated in rule making participate in more namespaces and make smaller edits than before having been involved in rule making.

IC2s2 Tuesday Afternoon Keynotes


Dashun Wang - Predictive Signals Behind Success

Dashun starts by talking about weather prediction: natural phenomena can be observed, modelled and predicted. Can success be measured, modelled and predicted? Obviously Dashun thinks so, and we are going to be looking at the results of a number of studies in this talk.

Science of Science

The aim is to build on top of the work of others, but add to it the massive data that we now have access to, along with the tools that we have developed.

First question, is there a mechanistic model that can predict future citations of papers? There are three factors:

* preferential attachment -> fitness
* ageing -> immediacy
* novelty -> longevity

You get to a formula that predicts the future citations of any paper. What changes from paper to paper is the “scaled time” for that paper.
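
For reference, the formula he is describing looks like the Wang-Song-Barabási citation model; this is my reconstruction, not something taken from the slides.

```latex
% Sketch of the Wang-Song-Barabasi citation model, which these three
% parameters (fitness, immediacy, longevity) appear to come from.
c_i(t) = m\left[\exp\!\left(\lambda_i\,\Phi\!\left(\frac{\ln t - \mu_i}{\sigma_i}\right)\right) - 1\right]
% \Phi is the standard normal CDF, \lambda_i the paper's fitness,
% \mu_i its immediacy, \sigma_i its longevity, and m a global constant.
```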

Given this function you can give a high-resolution prediction of future citations for that paper.

Can we now look at patterns for careers as well? We can look at the best paper for a person’s career. His work shows that your “best” paper can arrive at any time in your career. This is based on looking at the random impact rule.

There is a Matthew effect. Does the location of your biggest work affect where your second biggest work is? If we know where your biggest work is, then your next biggest is going to be close to it. This is sometimes called the “Hot Hand” period. This pattern can appear randomly in your career. Most people have one of these, and it usually lasts 4 to 5 years.

A scholar’s Google Scholar profile is highly affected by whether or not they have experienced this hot hand period.

diffusion and adoption of technology

He is talking about how many of the things we describe as adoption of technology are really substitution. Not everything is, but many things are.

What do systems driven by substitution look like?

Many of these systems follow power law growth.

The model for future growth is also captured by three parameters, which look a lot like the parameters from citation analysis.


Ulrik Brandes - The Space of All Centralities

This is going to be a methodological talk. There are hundreds of centrality measures. He goes on to describe what a centrality index is and shows that we can write a number of centralities in a common form, based on a relation between objects in our network.

    c(s) = \sum_t \tau(s,t)

You can think of this relation as being expressed in a path algebra. Different operators on the paths will produce different values.
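
Some illustrative choices of the relation \tau, all of this same summed form (my examples, not from the talk):

```latex
% Examples of dyadic relations \tau(s,t) that plug into c(s) = \sum_t \tau(s,t):
\tau(s,t) = a_{st}                              \quad \text{(degree: adjacency)}
\tau(s,t) = 1/d(s,t)                            \quad \text{(harmonic closeness: reciprocal distance)}
\tau(s,t) = \sum_{k \ge 1} \alpha^k (A^k)_{st}  \quad \text{(Katz index: attenuated walk counts)}
```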

If the measure is to be a centrality, then it must do worse as we move away from the “central” position; this means that neighbourhood-inclusion is respected.

We can look at the position of the node in the network, and compare the measure based on the location of the node in the network.

We can map social spaces into network spaces by using variables in the social space as positions in the network space.

When thinking of the space of all centralities we can think about rankings only. The number of possible rankings is determined by the number of items in the network: there are k! of them for a k-node network (just the permutation factor). This is quite a lot.

Remember though, we want to respect neighbourhood-inclusion; otherwise we get a ranking, but not one that preserves the property of being a centrality. It turns out that the number of admissible rankings will then depend on the network structure, but it will probably be lower than just the permutations of the nodes.

Remember, we are trying to understand how to get to centrality measures, and we have looked at some of the constraints of networks on what it means to be a centrality measure.

Now we look at some examples.

Looking at the Medici network: by applying the neighbourhood-inclusion requirement we get an initial baseline ordering of members of this network. It doesn’t provide a full ranking, but all other centralities must at least preserve the ordering we get from this analysis.

This provides an interesting new way to look at centralities.

This was quite a technical talk, and I appreciated the cleanness of the analysis, but without a deeper dive on some networks to analyse I’m not going to be able to fully grok the content.

IC2s2 - Tuesday Morning


Opening Remarks, President of GESIS

  • main note is his call for the potential creation of a new discipline, computational social science

opening session, chaired by Duncan Watts

Ciro Cattuto - ISI foundation, director of the data science lab

  • is behind sociopatterns.org
  • have many deployments over the last 10 years
  • they make the data open at sociopatterns.org/datasets
  • really cool work on interaction patterns in schools and hospitals
    • can see the gender separation increasing as the children age
  • also really interesting network study on depression, gender and social network position
  • I like the dimensionality reduction techniques for looking at time-resolved graphs. One of the keys here is the use of visualisation
  • also nice example of using AI / Machine learning to clean up the dataset
  • interesting point about how sensor cost is coming down, but phone cost is not, sensors will soon be disposable, next phase is to make sensors that are self-sufficient for collection of data over weeks

I remember seeing one of the first roll-outs of this kind of work about nine years ago, so it’s great to see the work that has evolved from that, and the tensor analysis on time-sliced networks to extract structure is really exciting.


Agnes Horvat - hidden signals of collective intelligence in crowdfunding

  • “wisdom of the crowd” has been confirmed
  • is looking at the framework of a crowdfunding site
    • is interested in who gets funding and who pays back
    • also looking at outcomes

A really interesting aspect of this talk is that traditional social measures (such as credit-worthiness) can merge really nicely with the data that is coming at scale from online crowdfunding platforms. Agnes is also going to follow up with some survey methods to find out more qualitative data about the people who are using these platforms.


Morning session - User Demographics and Privacy

We have six rapid fire presentations in this session. I was talking to someone earlier today who has access to a large volume of phone record data. He was describing to me the steps they take to ensure user privacy, and also how they get consent. What was interesting to me in that conversation was how very much the norms and decisions being made around this topic are being done on a case-by-case basis.

I’m constantly thinking now about questions like:

Is the lack of a global framework for some of these questions an issue?

The researchers I meet with seem trustworthy, how are they socialising the expected behaviours of how to deal with this kind of data, and does that affect their behaviours?

Are EU-like regulations going to lead us to an environment where we are unable to help patients as much as we might (thinking of DeepMind, and some calls I’ve heard about trying to understand how to effect social change for better health outcomes in developing nations)?

Are there moral positions that are self-evident around our ideal of data privacy, in a world where that privacy almost certainly does not exist in the way we think it does?

Many, many questions; let’s see what the speakers in this session have to say.


Yang-chih Fu et al. — Inducing Egocentric Networks with Privacy Settings:

This talk is looking at how privacy settings within social networks tend to impose structures on the data that you can get from those networks that might not actually exist in the ground truth of social interactions. I guess that this is quite a nice example of how the latent structure of the online system imposes structure in the data, I’m not sure how often we think about this kind of thing.

They compare surveys with Facebook data (another example of “the survey’s not dead yet” theme that I am starting to see at this conference).


Kalimeri et al - What do our digital records reveal about us?

  • They are inferring the moral constructs of people from a wide range of online data.
  • They have recruited 7500 people in the US, and these people agreed to provide access to a number of data sources, along with providing a lot of demographic data as well as the moral foundations questionnaire.
  • The data was collected over one month.
  • Medium in scale, but demographically representative

So, what do they find? At a very high level you can identify Binders and Individualists based on the kind of sites that they visit. To be fair, the sites that are being shown are not intuitively surprising (Fox News — Binders, Google — Individualists), but it is clear that they have fairly fine grained data on behaviours.


Schnell et al - New Techniques for privacy preserving record linkage at large-scale social science data sets

  • EU regulation recommended encrypting identifiers used for linking
  • GDPR will also prevent linkage through cleartext data

A key point of this talk is that using encryption kills your ability to have any tolerance for errors in your data. The protocol being presented here is to use a two party protocol. This has been done with patient data in the past.

One suggestion is to use encrypted statistical linkage keys. You pull a random set of substrings from a larger string. This is somewhat error tolerant, but you lose data. It’s messy.

This group is working on a tool built on top of bloom filters.

They create a bit-vector representation of the original source. You can then get to an n-gram similarity between two strings. This kind of encryption does preserve similarity, so it is open to attack. A method of vector folding is presented as a way to throw away some of the signal to harden against attack. They use 1-time and 2-time folding in their experiment.
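
A minimal sketch of that bigram-into-bit-vector idea, for illustration only; real privacy-preserving record linkage protocols use keyed hashes shared between the two linking parties rather than the plain hash used here.

```python
# Toy Bloom-filter encoding of a name's bigrams, with Dice similarity between
# the resulting bit sets approximating n-gram similarity. Illustrative only;
# production protocols use keyed HMACs agreed between the two parties.
import hashlib

FILTER_BITS = 1000
NUM_HASHES = 10

def bigrams(name):
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom_encode(name):
    bits = set()
    for gram in bigrams(name):
        for k in range(NUM_HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % FILTER_BITS)
    return bits

def dice_similarity(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

print(dice_similarity(bloom_encode("Jonathan"), bloom_encode("Johnathan")))
```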

Their metrics come out pretty well using this technique.

So, you can encrypt the data, while being error tolerant, but the folding leads to a high level of false-positives.

So the idea has potential, and although the talk leaves the end point a bit unproven, it is very encouraging.

The key thing about the bloom filter is that it needs to be hashed, and that hashing has to be shared by the two parties, so this technique is very useful for private sharing between two agreeing parties, but not useful for making encrypted data public for re-use. Still, this is an advance.

Here is a bit about [Bloom filters](https://en.wikipedia.org/wiki/Bloom_filter).


Bennati et al - Incentivised data sharing via group-level privacy-preservation

So, how do you convince people to give up their data so that you can create better services?

Sensors provide data to an aggregator. Those sensors can apply functions to the outbound messages. If the sensors increase privacy, the quality of their data goes down.

They are looking at a bottom-up approach based on network topology.

There are nodes of aggregators that roll up to a main aggregator. Each local aggregator is collecting from a collection of sensors.

They ran simulations to look at how different parameters of the graph led to different characteristics of privacy.

They found a way to saturate privacy while not giving up on accuracy, but they have not yet checked whether people would actually adopt this kind of system to give up more of their data. This seems to me to be really one of the critical points of work like this.


Dennis Feehan and Curtis Cobb - How many people have access to the internet? (The title is longer, check out the program!)

Ok, so standing on the other side of the “surveys are not dead yet” meme, there is a big graph showing that response rates to surveys are declining rapidly.

So the idea is to ask people in an inexpensive way how other people behave as a way of getting to scale, e.g. ask people on Facebook how many of their friends use the internet.

They actually collected information on Facebook! They took a random sample of people on Facebook.

There are lots of design decisions going on to make sure that respondents don’t get hit by survey fatigue. There are lots of other things going on here to help check for consistency in the reports too.

They got about 1500 people in each of six different countries.

Aside from the headline title of this talk, looking at the methodology of doing this kind of reporting on Facebook is fascinating.

They find that just under 50% of people are online in India, and in the US and UK the numbers head towards the high 80% rates.


Kashyap et al - Ultrasound technology and missing women in India

They are looking for sex ratio at birth distortions. People do not want to talk openly about sex-preferential abortions. There are a lot of data issues; it’s hard to capture the footprints in the data.

They want to think about how Google might be a source of information leading to the decision path towards sex selective abortions. They looked at searches for access to ultrasound.

They tried to train a model on the sex ratios they have, correlated with searches for ultrasound, to predict sex-ratio stats in the future.

This is a really interesting talk, showing how these kinds of techniques can lead to real insight into a deep societal problem.

IC2s2 2017, conference preview

I’m about to head to Köln for the International Conference on Computational Social Science (IC2S2) and I’m pretty excited for a whole bunch of reasons.

I’ve been fascinated by network science, and the potential application of techniques from that science to the rest of research, for quite a while, but generally this has been an interest that I have entertained from a distance.

I had the pleasure of attending an early instance of the NetSci conference series back in 2008, and even contributed to a working paper looking at the implications of the intersection between social data and network algorithms (Mining for Social Serendipity), however the academic track has not been my track at all, and with one thing or another this was a community that my career moved me away from for some time.

Over the last year that’s changed quite a bit, and now with my role at SAGE the very heart of what I am doing is trying to learn about this new emerging field of computational social science, and more than just learn about it, actually try to build tools to support it. It’s an awesome opportunity, because the application of tooling for data at scale to social data, and to data about social phenomena, is transformational for the social sciences.

I’m really looking forward to reconnecting to some of the network science community at IC2S2 and getting up to speed with how that relates to computational social science, as well as learning a load about where this aspect of the research field is right now, and what they see as their key challenges.

A few months ago I was interviewed by PLOS about my new role and this field. I’m still learning, but the interview is a good reflection of some of what remain my core beliefs about the effects of data on the social sciences.

Since that interview we have started to roll out the first product that has come out of thinking about researcher needs in this space. We think that there remains a lot that can be done to build up technical capacity in the social sciences, and we are designing online courses with exactly that in mind. [SAGE Campus](https://campus.sagepub.com) will support researchers through learning support and really well tailored content (I’m particularly excited about the experiments that we are doing around using JupyterHub for some of the courses).

I’m going to be attending the conference with my awesome colleague Katie Metzler. We are working on a bunch of other product ideas too, and one question we have for you is:

What question would you most have liked help with as you were getting started with computational social science? 

We want to build up a sense of what kinds of questions those are, and more interestingly find out if there are people who would love to help answer those questions. If you are at the conference and you see us please stop us and let us know, and if you want to ping us on twitter please go ahead!

Finally, Köln is a great city, and my wife and her family are from there, so it’s going to be a pleasure to engage with some amazing conversations in a city that I know really well. I’ll be at the conference from tomorrow through to the end of the week, if you see me, please stop me and say hi!

Futurepub10

This week I attended futurepub10. I love these events, I’ve been to a bunch, and the format of short talks and lots of time to catch up with people is just great.

# A new Cartography of Collaboration - Daniel Hook, CEO Digital Science (work with Ian Calvert).

Digital Science have produced a report on collaboration, and this talk covered one of the chapters from that.

I was interested to see what the key takeaways are that you can describe in a five minute talk. This talk looked at what could be inferred around collaboration by looking at co-authors actually using the Overleaf writing tool. It’s clear that there is an increasing amount of information available, and it’s also clear that if you have a collaborative authoring tool you are going to get information that was not previously available by just looking at the publication record.

Daniel confirmed they can look at the likely journals for submission, based on the article templates, how much effort in time and content that each author is providing to the collaboration, how long it takes to go from initial draft to completed manuscript, which manuscripts end up not being completed. There is a real treasure trove of information here. (I wonder if you can call the documents that don’t get completed the dark collaboration graph).

In addition to these pieces of metadata there are the more standard ones, institute, country, subject matter.

In spite of all of the interesting real-time and fine grained data that they have, for the first pass info they looked at the country - country relations. A quick eyeballing shows that the US does not collaborate across country boundaries as much as the EU does. The US is highly collaborative within the US.

Looking at the country to country collaboration stats for countries in the EU I’d love to see what that looks like scaled per researcher rather than weighted by researchers per country, are there any countries that are punching above their weight per capita?

In the US when you look at the State to State relations California represents a superstate in terms of collaboration. South Carolina is the least collaborative!!

The measure of centrality in the report is based on the number of documents related to collaborations.

Question Time!

The data that generates the report is updated in real time, but it seems like they don’t track it in real time yet. (It seems to me that this would really come down to a cost-benefit analysis; until you have a key set of things that you want to know about this data you probably don’t need to look at real-time updates.) Daniel mentions that they might be able to begin to look at the characteristic time scale to complete a collaboration within different disciplines.

In terms of surprise there was the expectation in the US that collaboration would be more regional than they saw (my guess is that a lot of the national level collaboration is determined by centres of excellence for different research areas, a lot driven by Ivy League).

Someone asks if these maps can be broken out by subject area. It seems probable that they can get this data, but the fields will be biased towards the core fields that are using Overleaf.

This leads to an interesting question: how many users within a discipline do you need to get representative coverage for a field? (When I was at Mendeley I recall we were excited to find that the number might be in the single-digit percentages, but I can’t recall if that still holds any more, nor why it might.)

Someone asks about the collaboration quality of individual authors. Daniel mentions that this is a tricky question, owing to user privacy. They were clear that they had to create a report that didn’t expose any personally identifiable information.

### Comment

I think that they are sitting on a really interesting source of information, and for any organisation to have information at this level, especially with the promise of real time updates, that’s quite exciting, however I’m not convinced that there is much extra information here than you would get by just looking at the collaboration graphs based on the published literature. This is what I’d love to see, can you evidence that the information you get from looking at real time authoring is substantively different than what you would get by mining the open literature? Doing this kind of real time analysis is probably only going to happen if Overleaf see a direct need to understand their user base in that way, and doing that is always going to need to be traded off against other development opportunities. Perhaps if they can find a way to cleanly anonymise some of this info, they could put it into the public domain and allow other researchers to have a shot at finding interesting trends?

The other papers in the report also look interesting and I’m looking forward to reading through them. The network visualisations are stunning and I’m guessing that they used gephi to derive them.

# Open Engagement and Quality Incentives in Peer Review, Janne Tuomas Seppänen, founder of Peerage of Science. @JanneSeppanen

Peerage of Science provides a platform that allows researchers to get feedback on their manuscripts from others (reviewing) before submission, and allows them to get feedback on how useful their reviews are to others. A number of journals participate, allowing easy submission of a manuscript, along with its reviews, for consideration for publication.

Janne is emphasising that the quality of the peer review that is generated in his system is high. These reviews are also peer evaluated, on a section by section basis.

Reviewers need to provide feedback to each other. This is a new element to the system, and according to Janne the introduction of this new section in their system has not negatively affected the time to complete the review by any significant factor.

75% of manuscripts submitted to their system end up eventually published. 32% are published directly in the journals that are part of the system. 27% are exported to non-participating journals.

### Questions

The reason why people take part in reviewing is that they can get a profile on how good their reviews are from their colleagues, building up their reviewing profile.

Is there any evidence that the reviews actually improve the paper? The process always involves revisions on the paper, but there is no suggestion that there is direct evidence that this improves the paper.

### Comment

Really, anything that helps to improve the nature of peer review has to be welcomed. I remember when this service first launched, and I was skeptical back then, but they are still going, and that’s great. In the talk I didn’t catch how much volume they are processing. I’m keen to see many experiments like this one come to fruition.

# Discover what’s been missing - Vicky Hampshire, Yewno

Yewno uses machine learning to extract concepts from a corpus, and then provides a nifty interface to show people the correlations between concepts. These correlations are presented as a concept graph, and the suggestion is that this is a nice way to explore a space. Specific snippets of content are returned to the searcher, so this can be used as a literature review tool.

I had the pleasure of spending an hour last week at their headquarters in Redwood City, California, having a look at the system in detail, and I’ll throw in some general thoughts at the bottom of this section. It was nice to see it all presented in a five minute pitch too. They do no human curation of the content.

They incorporated in 2014 and are now based in California, but the technology was created at King’s in London. As I understand it the core technology was originally used in the drug discovery realm, and one of their early advisors, Mike Keller, had a role in alerting them to the potential for this technology in the academic search space.

The service is available through institutional subscription and it’s been deployed at a number of institutions such as Berkeley, Stanford and the state library of Bavaria (where you can try it out for yourself.)

To date they have indexed 100M items of text and they have extracted about 30M concepts.

### Questions

Are they looking at institutions and authors? These are things that are on their roadmap, but they have other languages higher up in their priorities. The system won’t do translation, but they are looking at cross-language concept identification. They are also interested in using the technology to identify images and videos.

They do capture search queries, and they have a real-time dashboard for their customers to see what searches are being made. They also make this available to publishing partners. This information is not yet available to researchers who are searching.

They are also working on auto-tagging content with concepts, and there is a product in development for publishers to help them auto-categorise their corpus.

They are asked what graph database they are using. They are using DynamoDB and Elasticsearch, but Vicky mentioned that the underlying infrastructure is mostly off the shelf, and the key things are the algorithms that they are applying.

At the moment there is no API, the interface is only available to subscribing institutions. The publisher system that they are developing is planned to have an API.

### Comment

There is a lot to unpack here. The Scholarly Kitchen recently had a nice overview of services that are assembling all of the scholarly content, and I think there is something here of great importance for the future of the industry, but what that is is not totally clear to me yet.

I’m aware of conversations that have been going on for some years now about wanting to see proof of the value of open access through the development of great tools on top of open content, and as we get more and more open access content, the collection of all of that content into one location for further analysis should become easier and easier. However Yewno, along with other services like Meta and Google Scholar, has been building out by working on access agreements with publishers. It’s clear that the creation of tools built on top of everything is not dependent on all of the content being open; it’s dependent on the service you are providing not being perceived as threatening to the business model of publishers.

That puts limits on the nature of the services that we can construct from this strategy of content partnerships. It’s also the case that for every organisation that wants to try to create a service like this, they have to go through the process of setting up agreements individually, and this probably creates a barrier to innovation.

Up until now many of the kinds of services that have been built in this way have been discovery or search services, and I think publishers are quite comfortable with that approach. But as we start to integrate machine learning, and increase the sophistication of what can be accomplished on top of the literature, will that have the potential to erode the perceived value of the publisher as a destination? Will that be a driver to accelerate the unbundling of the services that publishers provide? In the current world I may use an intermediate search service to find the content that may interest me, and then engage with that content at the publisher site. In a near-future world, if I create a natural language interface onto the concept map, perhaps I’ll just ask the search engine for my answer directly. Indeed I may ask the search engine to tell me what I ought to be asking for. Owing to the fact that I don’t have a full overview of the literature, I’m not in a position to know what to ask for myself, so I’ll rely on being told. In those scenarios we continue to disrupt the already tenuous relationship between reader and publisher.

There are some other interesting things to think about too. How many different AI representations of the literature should we hope for? Would one alone be just too black-boxed to be reliable? How might we determine the reproducibility of search results? How can we ensure representation of correlations that are not just defined by the implicit biases of the algorithm? Should we give the reader algorithmic choice? Should there be algorithmic accountability? Will query results depend on the order in which the AI reads the literature? Many, many, many interesting questions.

The move to do this without any human curation is a bold one. Other people in this space hold the opinion that this approach currently has natural limits, but it’s clear that the Yenow folk don’t see it that way. I don’t know how to test that, but maybe as searches on the platform become more focussed, that’s the moment where those differences could come to light.

I do have some comments on the product itself. I spent a little time today using the demo site available from the State Library of Bavaria. It strikes me that I would quite like to be able to choose my own relevance criteria, so that I can have a more exploratory relationship with the results. I did find a few interesting connections by querying some topics that I was recently interested in, but I had the itch to peel back the algorithm and try to understand how the concepts were generated. It’s possible that this is the same kind of search angst I experienced years ago with keyword search, and that years of practice have simply beaten the inquisitiveness out of me, but for now it is definitely something I noticed while using this concept map: almost a desire to know what lies in the spaces between the connections.

At the moment they are looking to sell a subscription to libraries. It’s almost certain that this won’t totally replace current search interfaces (that sentence might come back to haunt me!). The challenge they face in this space is that they are Yet Another Discovery Interface, and people using these tools probably don’t invest a huge amount of time learning their intricacies. On the other hand, the subscription model can be monetised immediately, and you don’t have to compete with Google head to head.

On a minor note, looking at their interface there is an option to sign in, but it’s not clear to me why I should. I imagine that it might save my searches, or that it might let me subscribe to some kind of updating service, but I just can’t tell from the sign-up page.

## CrossRef Event Data - Joe Wass - @JoeWass

By this stage in the evening the heat was rising in the room and the jet lag was beginning to kick in, so my notes start to thin out a lot. Joe presented some updates on the CrossRef Event Data service. It was great to see it live, and I’d love to see it incorporated into things like Altmetric. Perhaps they need a bounty to encourage people to build some apps on top of this data store?

At the moment they are generating about 10k events per day. They have about 0.5M events in total.

They provide the data as CC0, and for every event in the data store they give a full audit trail.
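
If you want to poke at the data yourself, something like the sketch below is roughly what it takes. The endpoint and parameters are my reading of the Event Data query API documentation (obj-id, source, rows and a polite mailto), so check the current docs before relying on them; the DOI and contact address are placeholders.

```python
# Querying the CrossRef Event Data service for events about a given DOI.
# Endpoint and parameters are assumptions based on the public documentation.
from typing import Optional

import requests

EVENT_DATA_URL = "https://api.eventdata.crossref.org/v1/events"

def events_for_doi(doi: str, source: Optional[str] = None, rows: int = 25) -> list:
    """Fetch events whose object is the given DOI, optionally filtered by source."""
    params = {
        "obj-id": f"https://doi.org/{doi}",
        "rows": rows,
        "mailto": "you@example.org",  # placeholder "polite" contact address
    }
    if source:
        params["source"] = source  # e.g. "wikipedia" or "twitter"
    response = requests.get(EVENT_DATA_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["message"]["events"]

if __name__ == "__main__":
    # 10.5555/12345678 is a well-known test DOI, used here purely as a placeholder.
    for event in events_for_doi("10.5555/12345678", source="wikipedia"):
        # Each event records who (subj_id) did what (relation_type_id)
        # to which object (obj_id), when, and from which source.
        print(event["occurred_at"], event["source_id"], event["subj_id"])
```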

## Musicians and Scientists - Eva Amson - @easternblot

Eva gave a beautiful little talk about the relationship between scientists and musicians, and the fact that a disproportionately high number of scientists play instruments compared with the general population. She has been collecting stories for a number of years now, and the overlap between the two activities is striking. You can read more about the project on her site, and you can catch Eva playing with http://www.londoneuphonia.com on Saturday at St Paul’s Church Knightsbridge.

Three posts about product development

[Image: the lean value tree]

I’m catching up on some reading at the moment; trying to make headway on other work while jet lagged is proving a challenge. Anyway, here are a few nice posts about product development that popped up in my feed (hat tip to the Mind the Product Weekly Newsletter).

## What do people do in the spaces in between?

When thinking about what people do with your product, also think about what they don’t do, and how to help them get to where they are going.

The takeaway from this post is that by mapping out these interstitial moments you can get to a better understanding of your users’ needs, and better map the requirements of what you need to build.

## We have been getting MVP wrong all this time; the point is to validate, not to delight for its own sake.

Forget “MVP”, focus on testing your biggest assumptions

The key point in this post is that when deciding what to ship, you should use each iteration as an opportunity to test your riskiest assumptions, and understand what you expect to learn with each release. If you don’t know what those assumptions are, or what you are going to learn, why are you shipping the feature? I imagine that this post is mostly directed towards products that are still exploring the product-market fit space, but even established products live within spaces that are evolving, so some of this thinking carries over too.

It reminds me of the Popperian view that you can’t prove hypotheses, you can only reject them, so to be most valuable each experiment should be constructed to try to reject the most critical hypothesis.

I think there is at least one counter-argument to the main point in this post, but, you know, things are complex, so that’s OK. If you are in a space where you understand your users well, and you have considerable experience to hand, it is probably OK to just do what you know to be right in terms of benefitting the user.
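
Coming back to the main point, though, here is a toy sketch of what "test the riskiest assumption first" can look like in practice. The impact-times-uncertainty scoring is just one common heuristic I'm using for illustration, not something prescribed by the post, and the assumptions listed are made up.

```python
# Toy illustration of prioritising experiments by how risky each assumption is.
# The scoring scheme (impact x uncertainty) is one common heuristic, not the only one.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    impact: int       # 1-5: how badly the product idea breaks if this is false
    uncertainty: int  # 1-5: how little evidence we currently have

    @property
    def risk(self) -> int:
        return self.impact * self.uncertainty

assumptions = [
    Assumption("Researchers will upload their own data", impact=5, uncertainty=4),
    Assumption("Libraries will pay for a subscription", impact=5, uncertainty=2),
    Assumption("Users want weekly email digests", impact=2, uncertainty=5),
]

# The next experiment should target the highest-risk assumption, and each
# release should state what we expect to learn from it.
for a in sorted(assumptions, key=lambda a: a.risk, reverse=True):
    print(f"{a.risk:>2}  {a.statement}")
```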

## Burn the roadmaps!!

Throw out the product roadmap, usher in the validation roadmap!

This post was very welcome reading for me, as I have a terrible relationship with product roadmaps. I just think that in a fast-moving environment you don’t know what you are going to be doing in 12 months, and god forbid you are already tied down to what you are going to be doing in 18 months; if so, you are probably not exploring a new space. Of course, when you get to scale, and when you get to work on projects at scale, those kinds of timelines do in fact make sense, but I still like the idea of flipping the roadmap into one that is constructed around confirming and testing our understanding of the world, in contrast to one that sets out how we want to roll out our features.

## Lean value tree, and constant experimentation

The image at the top of this post is a representation of a tool called the lean value tree (see slide 30 of this deck). We have been using it a bit over the last two months in my current role, and I’m finding a lot of value in it. One of the things that ties the three posts I have linked here together is the idea of experimentation: understand the assumptions you are missing, test rigorously, and be led in your decision making by what you can learn. Something like the lean value tree can sit above these imperatives and help you make decisions about which experiments to spin up, and how to balance opportunities. Having worked it pretty hard in the past few weeks, I can see that it has a lot of value, but it still does not beat open conversation in an open team.
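
For anyone who hasn’t come across it, the rough shape of the lean value tree is a hierarchy running from a vision down through goals and bets to the experiments that test them. The sketch below is my own loose rendering of that shape with made-up names, not the canonical version from the deck.

```python
# A loose rendering of the lean value tree's shape: vision -> goals -> bets -> experiments.
# All the names and example content here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str
    learning_goal: str

@dataclass
class Bet:
    name: str
    experiments: list = field(default_factory=list)

@dataclass
class Goal:
    name: str
    bets: list = field(default_factory=list)

@dataclass
class ValueTree:
    vision: str
    goals: list = field(default_factory=list)

    def open_questions(self):
        """Walk the tree and list every experiment still to be run."""
        for goal in self.goals:
            for bet in goal.bets:
                for exp in bet.experiments:
                    yield goal.name, bet.name, exp.hypothesis

tree = ValueTree(
    vision="Help researchers get more value from the literature",
    goals=[
        Goal(
            name="Find product-market fit for a new discovery service",
            bets=[
                Bet(
                    name="Researchers want concept-level search",
                    experiments=[
                        Experiment(
                            hypothesis="Researchers will use a concept map over keyword search",
                            learning_goal="Demand exists before we invest in the interface",
                        )
                    ],
                )
            ],
        )
    ],
)

for goal, bet, hypothesis in tree.open_questions():
    print(f"{goal} -> {bet} -> test: {hypothesis}")
```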