Zooming Philly Skyline by Midjourney, thanks to Grant Cooper from Future Founders

Product Update 08-05-2022

If you want to get emails when we post new product updates, you can sign up here!

Why Tweetscape

In the last couple weeks, our team took the excitement from our last round of interviews, which included 2 people willing to be our first paying customers, and channeled it into technical exploration.

This has led to a lot of interesting technical conversations about possibilities, but also led us back to a divergent state.

In the midst of that divergence, I wanted to spend some time focusing on why we are building Tweetscape. What problem do all of these technical ideas need to solve?

To help ground myself in the why, I went back to some of our interviews and pulled some highlighted quotes.

These quotes point us to our why.

Our interviewees have a deep respect for twitter. They know that smart people post cool stuff there. At the same time, a lot of other people post a lot of other things.

There is a focus problem and a curation problem.

Tweetscape exists to help solve the focus and curation problems. We want to help productive people take advantage of the learning and social potential of twitter, without the anxiety of feeling like they need to be on the app all the time to not miss important things.

Tweetscape Mission

Give people more control over the information that codes them.

Tweetscape exists to give you more control over your own media encoding.

You can leave it up to the whims of the algorithm, or you can craft an environment based on what you really want to learn about (with a little algorithmic serendipity thrown in there for good measure, of course).

How does Tweetscape Fit

One thing we don’t want to forget is what twitter does well, which is giving users access to the flood of info for large, live events.

I saw twitter used to access the real time flood on the Fourth of July this year, as I watched the fireworks over the parkway in Philadelphia while shots were fired at police officers. Although this was a sad, horrifying, dystopian moment, it was a clear example of what twitter is best at.

While I watched the fireworks from my friend’s apartment, crowds of people suddenly ran in many different directions - it was chaos. We weren’t sure what was happening, but minutes later there were tweets sent out informing us of an active shooter, with updates on the situation coming every few minutes. I hardly ever use twitter in this way. I was impressed at how well my friends were able to find so much real time information about the situation.

A much lighter example of this live conversation is how people who watch shows like The Bachelor use twitter. On commercial breaks they fire up the app and laugh as they go through a timeline of live commentary on the last 15 minutes of the episode. It’s pretty incredible to watch this live conversation happen between thousands of people.

However, the emphasis on right now makes it hard to use twitter in a more focused mode, like our interviewees want to. Tweetscape fits as the complementary solution to the live info flood that current twitter provides. We want to help users who don’t want to be on all the time not miss the information that matters to them.

With that said, Tweetscape isn’t a tool meant to be used throughout the day. Instead, it will be a tool that users check periodically (once a day, once a week, or once a month, depending on their interests and goals) to catch up in an efficient way.

When users want to experience the flood, they can open the classic bird app and get it.

Enough about Why, let’s talk about How

People agree we should have more control. But there is a lot of disagreement on how that control should be granted.

I’m hoping that putting all our ideas for how in one place will help our team decide which ones we should try first.

Luckily for you, that is what we’ve spent the last couple of weeks diverging on!

Sources, Streams, Views and Zooming

I introduced sources, streams, and views in the 07-15 Product Update.

One of the issues with our original prototype is that the categorized feeds are too general. Hive clusters are massive, and just because someone is highly ranked in the Bitcoin community doesn’t mean they always tweet about bitcoin. Accounts that post about many varying topics make it much harder to reliably nerdsnipe yourself on a specific topic.

However, we need to start somewhere, and for users who are completely new to a certain topic, starting with a wide net like a Hive cluster could work well, with the right UX.

That is where zooming comes in.

The combination of sources, streams and views will allow users to zoom in and out of different topics and communities. This UX will follow the experience we shared in 07-15 Product Update:


Zooming could be done in a variety of ways while starting at a generalized feed

Those selections will inform future tweets and feeds from that same group, narrowing down to only the most relevant information.

The zoom in process is how a user could go from a stream about longevity down to a specific stream about cyber-longevity. Then, even an empty stream is a signal, meaning there isn’t anything new to catch up on in the community they’ve crafted.
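One way to picture that zoom in step is as progressive filtering: each selection the user makes tightens the stream. This is just an illustrative sketch with made-up tweets and a naive substring match, not our actual design:

```python
# Hypothetical sketch of "zooming in": each selection adds a filter,
# narrowing a broad stream down to a more specific one.

def zoom_in(tweets, keywords):
    """Keep only tweets that mention at least one of the keywords."""
    kws = [k.lower() for k in keywords]
    return [t for t in tweets if any(k in t.lower() for k in kws)]

stream = [
    "New paper on longevity and cellular reprogramming",
    "Cyber-longevity: backing up the mind?",
    "Longevity escape velocity explained",
    "My dog learned a new trick today",
]

# Zoom from a broad "longevity" stream into "cyber-longevity".
broad = zoom_in(stream, ["longevity"])
narrow = zoom_in(broad, ["cyber"])
print(narrow)
```

A real version would filter on richer signals than substrings (entities, engagement, authors), but the shape of the interaction is the same: selections compose into narrower and narrower streams.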

Zooming out will be just as important.

The zoom out experience is what I explored last week when I shared the Account Recommendations app where you can seed a stream with a handful of accounts and get a list of recommended users based on a couple of different methods.

If a user is interested in a topic, but only has a couple of accounts they know who tweet about it, we will need to provide ways for them to find the other people interested in that topic.

All of this will be in an effort to give users a better way to stay up to date on their interests than twitter lists, with their troubles hilariously displayed in this example:

Lucerne-style “Hard” Feeds

I’ve tweeted about lucerne before - and for good reason: it’s a great idea and tool from Linus.

Linus shared an important insight into his design:

“playing with filters is a primary user interface.”

“exploration and consumption are one and the same activity.”

This is a great start towards how I envision the zoom in/out experience I described above. “Exploration and consumption are one and the same.”

One of the problems with lucerne is that it doesn’t assist in the zooming in and zooming out. It doesn’t help with things like account recommendation and it doesn’t help you pick out keywords or select “model tweets.” Users must create Lucerne channels on their own.

Stream Exploratory Analysis

Lucerne is a great start, but it can be taken much further.

There is a lot of simple analysis (simple in this case mostly meaning it wouldn’t require ML models) that could assist stream building. It would largely look like a combo of the lowbies app and the Account Recommendations app, along with more language-focused things like analyzing popular keywords among sets of users.

In fact, I think keyword suggestions as a filter will end up being hugely important in Tweetscape streams, but I still need to put that hypothesis to the test with much more exploratory data analysis (EDA) over the next few weeks.
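As a rough sketch of the kind of simple analysis meant here, popular-keyword extraction over a set of accounts can be little more than counting. The tokenizer, stopword list, and example accounts below are all illustrative, not our actual pipeline:

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "to", "and", "of", "in", "is", "on", "for"}

def popular_keywords(tweets_by_user, top_n=3):
    """Count keyword frequency across a set of users' tweets.

    Each keyword is counted at most once per user, so one prolific
    account can't dominate the suggestions.
    """
    counts = Counter()
    for user, tweets in tweets_by_user.items():
        words = set()
        for tweet in tweets:
            words.update(w for w in re.findall(r"[a-z#@']+", tweet.lower())
                         if w not in STOPWORDS)
        counts.update(words)
    return counts.most_common(top_n)

tweets_by_user = {
    "alice": ["Thoughts on rapamycin and longevity"],
    "bob": ["New longevity results", "rapamycin trial update"],
    "carol": ["longevity twitter is the best twitter"],
}
print(popular_keywords(tweets_by_user))
```

The top keywords become candidate filters: exactly the kind of suggestion that could power the zoom-in UX.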

Changes Over Time

Another important interest and consideration is how conversations develop over time.

Users want to be able to see changes in conversations over time. They want to see a topic “bubbling up” within a certain community. The focus here is not so much “top of the feed,” staying up to date in real time, but seeing trends. Trends need to reach a certain gravity in order to become relevant and attractors on their own.

There are a variety of temporal preferences. Some people, especially more social users, will be interested in learning which words are popular in the last hour, while others will be interested in reviewing the last month. My feeling is that if we manage to bring the temporal aspect down to something like “this is what happened up until 1h ago,” we will already achieve a lot. That would allow us to do the parsing and correlation for a given stream configuration in the background.
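The “up until 1h ago” idea lends itself to precomputing keyword counts per time window in the background and diffing adjacent windows to spot what is bubbling up. A minimal sketch, with made-up keyword lists standing in for real extracted data:

```python
from collections import Counter

def trending(prev_window, curr_window, min_count=2):
    """Rank keywords by growth between two time windows.

    A keyword "bubbles up" when its count in the current window
    clearly exceeds its count in the previous one; min_count filters
    out noise that hasn't reached any gravity yet.
    """
    prev = Counter(prev_window)
    curr = Counter(curr_window)
    growth = {w: curr[w] - prev[w] for w in curr if curr[w] >= min_count}
    return sorted(growth.items(), key=lambda kv: kv[1], reverse=True)

# Keywords extracted from a stream: the last hour vs. the hour before.
last_hour = ["zk", "zk", "zk", "rollup", "rollup", "dao"]
hour_before = ["dao", "dao", "rollup", "rollup"]
print(trending(hour_before, last_hour))
```

Longer horizons (last week vs. last month) would use the same diff over bigger windows, which is what makes the background-computation framing attractive.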

A question that comes to mind: how often do we imagine users opening Tweetscape, and for what purpose? In the dashboard scenario we need much higher frequencies than in the digest scenario.

Also, let’s remember that part of our value proposition is “don’t miss the content that matters to you.” This points to a scenario where the user can afford to not be present in the moment and can look back at relevant events. In their usual modus operandi, users would come to Tweetscape every now and then to “catch up” in an efficient way, but as soon as something “newsworthy” (some catastrophe) happens, they’d probably want to tune into a hashtag and receive live updates. In my opinion, what we are working on doesn’t really matter in that scenario, because in that moment they’d actually want to experience the flood.

Graph DB Things

One enticing option is the graph db Neo4j. Julian has had good experience working with it before and it fits twitter data extremely well. Using a db to take advantage of that structure could end up being a huge boost for our development. This is something we will explore over the next few weeks.

Tweet Network Links and Graph Stuff

There are three different content linkages we are interested in:

  1. social-social:

    • (accountA)-->[follows]-->(accountB)
  2. content-content:

    • (tweetA)-->[replies/retweets/quotes]-->(tweetB)
  3. social-content-social:

    • (accountA)-->[tweets]-->(tweetA)-->[replies/retweets/quotes]-->(tweetB)<--[tweets]<--(accountB)
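To make the three linkages concrete, here is a toy in-memory version, with plain dicts standing in for whatever store (Neo4j or otherwise) we end up using, and hypothetical ids:

```python
# A minimal in-memory sketch of the three linkage types.

follows = {                      # 1. social-social
    "accountA": {"accountB"},
}

replies = {                      # 2. content-content
    "tweetA": "tweetB",          # tweetA replies to / quotes tweetB
}

tweeted_by = {                   # ties content back to its author
    "tweetA": "accountA",
    "tweetB": "accountB",
}

def social_content_social(tweet):
    """3. Walk accountA -> tweetA -> tweetB -> accountB."""
    author = tweeted_by[tweet]
    target_tweet = replies.get(tweet)
    if target_tweet is None:
        return None
    return (author, tweet, target_tweet, tweeted_by[target_tweet])

print(social_content_social("tweetA"))
```

The third linkage is just the composition of the author edge with the content-content edge, which is why tweet data alone (no follows look-ups) is enough to build it.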

The Account Recommendations app from last week leveraged the linkages in 1 and 3.

The graphs of 2 and 3 are what we are going to focus on, because they are the most readily available graphs due to twitter rate limits.

For reference, you can only run 15 follows look-up calls per 15 minutes, each returning up to 1000 users. That means you can only pull the following lists of 15 users in 15 minutes, and that assumes they each follow fewer than 1000 accounts, which isn’t a safe assumption.
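The back-of-envelope math on that limit makes the problem obvious. A quick sketch (the example account counts are made up):

```python
import math

RATE_LIMIT_REQUESTS = 15   # follows look-up calls per window
WINDOW_MINUTES = 15
USERS_PER_REQUEST = 1000   # max users returned per call

def minutes_to_crawl(following_counts):
    """Estimate minutes needed to pull the full following list for
    each account, given the 15 requests / 15 minutes rate limit."""
    requests = sum(math.ceil(n / USERS_PER_REQUEST) for n in following_counts)
    windows = math.ceil(requests / RATE_LIMIT_REQUESTS)
    return windows * WINDOW_MINUTES

# 100 accounts, each following 2,500 users -> 3 requests apiece,
# 300 requests total, 20 windows: 300 minutes, i.e. 5 hours.
print(minutes_to_crawl([2500] * 100))
```

Crawling the social-social graph for even a modest community takes hours, which is exactly why the tweet-based linkages are the more practical starting point.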

With that in mind, we think there is a lot of good information to be pulled from the social-content-social network. Our account recommendations using that method have already received good feedback, and we have many more ideas to figure out better ways to determine which entities in that graph have the most gravity.
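To be clear, this is not the exact method behind last week’s Account Recommendations app; it is just a sketch of the co-engagement idea, with a naive count standing in for a real gravity measure and made-up accounts and tweet ids:

```python
from collections import Counter

def recommend(seed_accounts, engagements, top_n=3):
    """Score accounts by how often they engaged with the same tweets
    as the seed accounts (a crude proxy for "gravity").

    engagements: dict mapping tweet id -> set of accounts that
    replied to / retweeted / quoted it.
    """
    seeds = set(seed_accounts)
    scores = Counter()
    for accounts in engagements.values():
        if accounts & seeds:
            for account in accounts - seeds:
                scores[account] += 1
    return scores.most_common(top_n)

engagements = {
    "t1": {"alice", "bob", "dana"},
    "t2": {"alice", "dana"},
    "t3": {"bob", "erin"},
    "t4": {"frank"},          # no overlap with the seeds, ignored
}
print(recommend(["alice", "bob"], engagements))
```

Better gravity measures would weight engagement types differently (a quote tweet says more than a like) and discount very popular tweets that everyone touches.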

There are still a number of things to figure out about the graph.

Some Dataset and Modeling Ideas

One place where we are going to have to get creative is in the datasets we use.

This builds even more on questions like “how do we decide which nodes to include in the graph?”

Atom-level Data

“Atom-level” in this case means the base unit of our dataset. The obvious answer is the tweet, which is how we are starting. But there are some other interesting options we could use as the base unit as well, which could help mitigate issues like short reply tweets.

This is a section I’d love to get ideas on from people on twitter - so as always, if you have any other ideas, DM me!

Named Entity Recognition

Building custom named entity recognition for tweets is likely going to become very important for us to build streams that meet the high expectations of our users.
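Custom NER for tweets is a bigger project, but tweet syntax already gives away some entities for free. As a crude regex baseline (a stand-in, not the custom model we have in mind):

```python
import re

def extract_entities(tweet):
    """Pull out the low-hanging tweet "entities": @mentions,
    #hashtags, and $cashtags. A real NER model would go further
    (people, projects, papers), but these are free signal."""
    return {
        "mentions": re.findall(r"@(\w+)", tweet),
        "hashtags": re.findall(r"#(\w+)", tweet),
        "cashtags": re.findall(r"\$([A-Za-z]\w*)", tweet),
    }

tweet = "Great thread by @someuser on #tooling, $ETH tangent included"
print(extract_entities(tweet))
```

Even this baseline is enough to start attaching structured entities to streams; the custom model’s job is everything the syntax doesn’t mark.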

Using Current Twitter Communities/Lists as Labels

Andre started some of his NLP experiments by taking groups of users seeded around a specific topic, then using a model to predict which of those groups the users should fit into. In other words, it used the grouping as a label. This was with an extremely small dataset (6 accounts), but the results were still interesting.

To build on that, we are going to create a much larger dataset like this by leveraging current twitter users’ lists and communities.
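Andre’s experiment used a real model; purely to illustrate the labeling idea (label = list or community membership), here is a toy word-overlap classifier with made-up labels and tweets:

```python
from collections import Counter

def build_profiles(labeled_tweets):
    """Build a bag-of-words profile per list/community label."""
    profiles = {}
    for label, tweets in labeled_tweets.items():
        profiles[label] = Counter(
            w for t in tweets for w in t.lower().split())
    return profiles

def predict(profiles, tweet):
    """Assign the label whose profile shares the most words."""
    words = set(tweet.lower().split())
    return max(profiles,
               key=lambda lbl: sum(profiles[lbl][w] for w in words))

labeled_tweets = {               # lists/communities used as labels
    "longevity": ["rapamycin extends lifespan", "aging research update"],
    "crypto": ["rollup fees dropping", "new l2 launch"],
}
profiles = build_profiles(labeled_tweets)
print(predict(profiles, "thoughts on lifespan and aging"))
```

The point is the supervision signal, not the model: list curation done by other users becomes free labels, and any classifier (including the NLP models Andre is experimenting with) can be trained against it.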

I’m excited to share the results from those experiments after we build those datasets and run more tests.

These lists and communities will also be great places for users to start zooming into their interests like I described above.

Some Other Things We are Thinking About

Thanks Again!

If you’ve made it this far, thanks for keeping up with us!

See you next week.