
My quick thoughts, back stage, and rants as I try to teach kids about the Web while learning how to help others build a better Web.

Greg McVerry

At what point in history did the syllabus become more a tool for external forces outside of the classroom rather than a tool to guide the learners in the classroom?

Greg McVerry

Following People or Feeds in the #IndieWeb #mb #DoOO #edtechchat #literacies

2 min read

I am scrolling through history (h/t to Kevin Marks for reminding me of the curated posts by danah boyd) as we discuss how best to follow people in social readers on the IndieWeb.

Tantek Çelik has suggested nobody ever in the history of the web wants to follow feeds. danah seemed to agree in 2004.

Tantek suggested the one-button follow that people have come to love on social media. In fact, I have been documenting discoverability and following on Tumblr, and it is amazing.

The problem is the firehose. Social media silos use proprietary data and algorithms to reduce the chronological feeds. Tumblr and Facebook decide what I see.

On Twitter I could never follow the chronological feed of the thousands I follow. A follow on Twitter is an h/t, nothing more. Instead, as a human, I have to curate my feed using TweetDeck into 37 different feeds (columns by hashtag).

On Slack, IRC, Telegram, we have channels.

Nobody wants to follow my firehose, or Aaron Parecki's, or Chris Aldrich's... your phone might explode. Between the three of us you may get over 100 updates each day... and that is a low estimate.

What can be done for following and discovering people? Can we follow people and not feeds while avoiding the firehose? Well, a bunch of ideas are floating around chat:

  • leveraging topical webrings
  • creating h-card directories of people to follow on websites
  • creating a public h-card directory
  • encouraging the use of p-category with key topics/tags in an h-card.
  • adding preferred feeds to your h-card.
  • adding logic to social readers so if you follow someone and they have a feed with the same name as a channel you get auto-subscribed.
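To make the h-card ideas above concrete, here is a sketch of what a reader could pull from such a card. The markup, the person, and the u-feed property are illustrative assumptions, parsed here with only the standard library rather than a full microformats2 parser:

```python
from html.parser import HTMLParser

# A hypothetical h-card advertising topics (p-category) and a
# preferred feed, per the ideas listed above.
HCARD = """
<div class="h-card">
  <a class="p-name u-url" href="https://example.com">Jane Blogger</a>
  <span class="p-category">indieweb</span>
  <span class="p-category">literacies</span>
  <a class="u-feed" href="https://example.com/notes.xml">preferred feed</a>
</div>
"""

class HCardScanner(HTMLParser):
    """Collect p-category values and u-feed links from an h-card."""
    def __init__(self):
        super().__init__()
        self.categories, self.feeds = [], []
        self._in_category = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if "p-category" in classes:
            self._in_category = True
        if "u-feed" in classes and "href" in attrs:
            self.feeds.append(attrs["href"])

    def handle_data(self, data):
        if self._in_category:
            self.categories.append(data.strip())
            self._in_category = False

scanner = HCardScanner()
scanner.feed(HCARD)
print(scanner.categories)  # ['indieweb', 'literacies']
print(scanner.feeds)       # ['https://example.com/notes.xml']
```

A social reader seeing those categories could suggest the card to people following those topics, and subscribe to the preferred feed on a one-button follow.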

I get not liking feeds. I threw out my RSS feeds after a decade. It just got unmanageable: broken links, imported OPML files. So I started rebuilding feeds for my social reader. So much work. I got about halfway through my channels... and then stopped... keep meaning to go back... but you know....

Grooming feeds, a crappy experience since 2004.


Greg McVerry

Attention all @wordpress users: after testing we can say post-kinds is 5.0 compatible BUT do not turn on the Gutenberg editor. Post-kinds and other plugins WILL break

Greg McVerry

why many of us do things the way we do on . Had an amazing opportunity to chat with the talented @eddiehinkle on an episode here:
We describe why the web matters.

Greg McVerry

Gonna take a community to hold that back scratcher: @Tumblr to the #IndieWeb

9 min read

Import–needs rock solid LiveJournal-clone and Tumblr support if your site is to serve as an archive. I don’t know if there even is a working Wordpress plugin to import from LJ or Dreamwidth. The best-supported Tumblr->Wordpress importer is actually better than most standalone Tumblr backup tools, but it still mangles video posts/embeds. It’d also be cool to have import tools for AO3, Deviantart, and other major fanwork repositories.


Are there export tools that you can use to get Tumblr posts out as JSON data? We used to tell people to use the deprecated API. I do not know if that still works. People have done some amazing migration work. Stop by the IndieWeb dev chat channel to brainstorm.
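For the JSON question, Tumblr's v2 API serves posts as JSON if you have an API key (the endpoint shape below matches Tumblr's public docs; whether it keeps working for full backups is exactly the open question above). A rough stdlib sketch, with the blog name and key as placeholders:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def posts_url(blog, api_key, offset=0, limit=20):
    """Build a Tumblr v2 API request for a blog's posts as JSON."""
    query = urlencode({"api_key": api_key, "offset": offset, "limit": limit})
    return f"https://api.tumblr.com/v2/blog/{blog}/posts?{query}"

def dump_posts(blog, api_key, pages=5):
    """Page through the API and collect the raw post dicts."""
    posts = []
    for page in range(pages):
        with urlopen(posts_url(blog, api_key, offset=page * 20)) as resp:
            posts.extend(json.load(resp)["response"]["posts"])
    return posts
```

Writing the collected dicts to disk with `json.dump` gives you a plain-data archive to migrate from, independent of any importer.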

Going Tumblr to WordPress is well documented, and we are trying to build WordPress themes for the artistic crowd.

separate out posts I created, posts I added comments to, and posts I just shared via reblog. A nice addition would be the ability to copy Tumblr tags to a metadata field that’s separate from Wordpress tags–WP tags tend to be organizational, whereas on Tumblr, tags are often a sidechannel for comments that don’t propagate on reblog, thus filled with all sorts of crap.

Many people do this on their websites. I am using Known. I can sort by tag (see my footer), by post type (drop-down menu), or both, or search. Similar WordPress plugins exist and work this way out of the box.

On that note, Itch #3 is mass-organization tools. Select all posts that fit certain criteria and do a mass edit on their tags, categories, post types, or other taxonomy data. Lots of fandom folks have years or decades worth of content from various sites, making organizational tasks highly impractical to do manually. I’ve dicked around with a few Wordpress mass-edit plugins, but none of them seemed to work that well.

Look at the HTML. Do tags have a link with rel=tag? Modern microformats prefer p-category, but parsers recognize both.
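A tag scanner that honors both conventions can be sketched with the standard library alone (the markup fed in is hypothetical):

```python
from html.parser import HTMLParser

class TagScanner(HTMLParser):
    """Treat both classic rel="tag" links and modern p-category
    elements as tags, the way microformats2 parsers do."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self._capture = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rels = attrs.get("rel", "").split()
        classes = attrs.get("class", "").split()
        if "tag" in rels or "p-category" in classes:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.tags.append(data.strip())
            self._capture = False

s = TagScanner()
s.feed('<a rel="tag" href="/tag/indieweb">indieweb</a>'
       '<span class="p-category">fandom</span>')
print(s.tags)  # ['indieweb', 'fandom']
```

So a migration tool could lift Tumblr's comment-style tags into either form and consumers would still see them as tags.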

Not sure how well the existing backfeed tools support Tumblr notes, but for fandom to bite, the Tumblr support oughta be pretty damn slick. And the cross-posting should ideally support all the features of a native Tumblr post, because by god, we will use them, and we will notice if an expected one is missing. I can spot IFTTT cross-posts from AO3 without even reading text, and tbh my eyes usually skip right over them, unfair as that may be.

Got post types? We got all the post types and then some. Plus people be experimenting every day. Different platforms offer more post types. In general the most accepted are article, note, repost, reply, and photo. Every platform supports these at a minimum, but then you have eat, listen, read, watch, jam, chicken, checkin, bookmark, and many more.

We don't write specs first. We do first, and when a bunch of folks end up doing the same thing, then we write it down.

If this project extends to feed readers/aggregators, the embrace of multi-site cross-posting implies a need for deduplication. Preferably getting rid of Tumblr’s charming “barf the full post back out onto your dashboard every time someone you’re following shares/responds to it” behavior in the process. For fandom use, it’ll need a blacklist feature. And I’d love some more heavy-duty filtering, selective subscriptions (like to just one tag of a blog), creating multiple feeds based on topic or on how much firehose you want

I would love to hop on a video call and show you how the social readers work. They can do so much! Slick.

This may be a personal itch, but at least for personal archiving needs, I’m sick, sick, sick of the recency bias that’s eaten the internet since the first stirrings of Web 2.0. Wikis are practically the only sites that have escaped chronological organization. It would be cool to have easily-manipulated collections with non-kludgey support for series ordering, order-by-popularity, order-by-popularity with a manual bump for posts you want to highlight, hell even alphabetical ordering. None of these things are remotely unsolved problems, but they’re poorly supported on the social-media silos most people’s content lives on these days. Fandom’s suffered from this since at least the days of LiveJournal, which had the ominous beginnings of what’s since become the Tumblr Memory Hole. Relentless chronological ordering + the signal-to-noise ratio of any space with regular social interaction = greatest hits falling down the memory hole unless a community practices extensive manual cataloguing. Hell, LJ fandom did practice extensive manual cataloguing, but even within that silo, there was so much decentralization that content discovery was shit if you didn’t know the right accounts to search through. Like, fuck, at least forums bump threads to the top if they’re still active–LJ and blogs have the same “best conversation evar falls inexorably off the map as new posts are added, no matter how active it is” problem that InsideTheWeb forums did in 1999. (Anyone else remember InsideTheWeb? AKA 13-year-old me’s first experience with platform shutdown, frantic archiving attempts, and massive data loss. Fun times.) Tumblr and Twitter, meanwhile, spam you with duplicates of the original post every time someone you’re following replies to/shares it, a key component of the endless firehose of noise drowning out any attempt to hang on to the signal.

The whole concept of IndieWeb fails to address (and might even worsen) what I suspect is the core dysfunction of social media. Which is the degradation of community spaces, and their replacement with a hopeless snarl where all content lives in individual accounts. There are a lot of weird effects that arise when the “social” sphere is built entirely upon the one-on-one connections created when someone subscribes to another account or gives someone else permission to view their restricted posts. Echo chambers, shame mobs, out-of-context remarks going viral, popular accounts setting off harassment storms whenever they disagree with someone, the difficulty of debunking hoaxes once they’re out in the wild… all of those are either created or made much, much worse by the lack of any reasonable, stable, shared expectation of who a post’s audience is.

This is true, and I have been guilty of being insensitive due to context collapse myself, especially around IndieWeb advocacy: forgetting the work involved, and the privilege required to have the time and treasure for this work.

But I think you are off a bit. People are nicer on their own domain. Something about owning the space where you speak from seems to reduce the shouting. Holistic tech is harder but it leads to better democracy when compared to prescriptive technology.

We are also experimenting with bringing back webrings to create a sense of protected or curated community. Fandom groups could have a collective list and a Code of Conduct.

We are also experimenting with restricted posts by requiring IndieLogIn, meaning I invite people to see restricted posts either by ring membership, where you log in with your domain, or privately, where I share just with a domain. Can people still screenshot and share? Yes, the world has always had assholes. The web can't fix that.

Basically, if “own your content and host it on your site” also applies to your comments, interactions, etc, it starts running counter to one of the strengths of the Old Web. Which was community contexts where you explicitly weren’t posting to your own space or addressing everyone who might be looking at the main clearinghouse of all your different stuff. You were posting to the commons shared by a particular group with a particular culture and interests, not all of whom were people you’d necessarily want to follow outside that limited context, some of whom you might disagree with or dislike, but in any case you knew what audience you were broadcasting to. You knew what the conversation was, how similar conversations had gone in the past, and the reputations of all the main participants–not just the ones you yourself would subscribe to and the ones attention-grabbing enough to get shared by the people on your subscription list. And you weren’t spamming all your other acquaintances with chatter on a topic they weren’t interested in.

A lot of philosophical discussions are going on right now about webmentions, ethics, and displays. We had a few sessions at our last IndieWeb Camp in Berlin.

Shared spaces can also establish whatever social norms they need and moderate accordingly. (Plus, plurality of spaces = plurality of norms for different needs, which would solve a LOT of what’s currently ailing fandom.) Peaceable enforcement of a code of conduct, beyond the “minimum viable standard” sitewide abuse policy, is fundamentally impossible on social media, where individual muting is the closest thing you can get to moderation. That + unstable audience = any social norms that exist are so unenforceable it turns people into frothing shame-mob zealots, ratcheting up the coercive pressure on everyone the more it fails to work on the handful of unrepentant assholes who would’ve been permabanned from any self-respecting forum within a week. Moving onto personal sites with beefed up syndication/backfeed capabilities ain’t gonna fix that. Meanwhile the truly heinous dickweeds who’d ordinarily run afoul of the sitewide abuse policy will have the same capabilities, minus any risk of getting banned.

IndieWeb itself is a group of organized bloggers. We also connect at real-life events, on Slack/IRC, and on wikis... you know, just like fandom. We have a code of conduct. It covers both real-life and online spaces even though we have no central organization.

Also see earlier comment about webrings.

That said, one potential point of friction is that fandom is far more pseudonym-centric than the devs and tech hobbyists who’ve coalesced around IndieWeb so far.

More so for safety than just hobbyist preference. We built tools to allow for pseudonyms:

I am late for work... Really want to keep the dialogue going.

Greg McVerry

Twitter also sells your privacy. I rely pretty much on students who can answer the question, "My URL is...," but many courses organize on Twitter, just to name a few.

Greg McVerry

Team behind Samizdat: Journal of Blogging and Social Media Research continues to grow. Need experts from all disciplines and corners. Please lend expertise. Reclaim our research.

Greg McVerry

Feed the Zeke 🏈

Greg McVerry

Using this search engine built by @snarfed and @csweike of 2,300 sites, there are 154,00 results for [learn microformats], 9 results for [learn h-card], 434,000 for [how to h-card], and over 24 million for [problem h-card -issues]. Rough results, but an interesting knowledge graph.

Greg McVerry

Scoping Out Basics of #IndieWeb Search

4 min read

Over the weekend I met with the CEO of BLUR Search Technologies. Jaime is also my brother-in-law, and he sponsored IndieWebCamp NYC in 2018. We mainly gathered for Thanksgiving, the second Thanksgiving, and finally leftovers.

As we all played clean-the-fridge we snuck away to scope out a possible search engine for the IndieWeb community. Blur Search Technologies will donate time and technology, but we will need some help implementing some building blocks: IndieAuth, the Post Type Discovery algorithm, etc.

We will also check out and see how much of it we can use. I think it will be a ton, plus we have data already to play with.

Opt-in with IndieAuth

Yes, many of us publish openly, even with liberal licenses that allow for remixing and forking, but this does not mean we want the data scraped, parsed, and sorted. The right thing and what you have the right to do are not always the same.

Thus the first feature we would need is an opt-in service using the IndieAuth protocol, meaning the only website data the search engine would collect would be that which you authorized.
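A minimal sketch of that opt-in check, assuming a hypothetical token endpoint and the standard IndieAuth pattern of asking the endpoint who a token belongs to (a real implementation would discover the endpoint from the user's homepage):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint; real code discovers it via rel="token_endpoint".
TOKEN_ENDPOINT = "https://tokens.example.org/token"

def verify_token(access_token):
    """Ask the token endpoint who this token belongs to (IndieAuth)."""
    req = Request(TOKEN_ENDPOINT, headers={
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    })
    with urlopen(req) as resp:
        return json.load(resp)  # includes a "me" URL on success

def opted_in(token_info, allowed_sites):
    """Only crawl a site whose owner verified ownership via IndieAuth."""
    me = token_info.get("me", "").rstrip("/")
    return me in {s.rstrip("/") for s in allowed_sites}
```

The crawler would then refuse any site whose `me` URL is not in the opt-in list, so removing yourself is as simple as revoking the authorization.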

Grant Richmond has done this well with the h-card directory. Speaking of which...

Types of Tables

We first discussed what types of tables we need and what data is available to fill them. We did not decide if each top-level h-* would get its own table or if the h-* type would be the first column.

  • h-entry
  • h-review
  • h-feed
  • h-card
  • h-cite

Again we looked at Grant Richmond's UI, but the h-card directory would get parsed as soon as someone joins the search engine.
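As one sketch of the second layout, with the h-* type as the first column, here is a SQLite version (all column names and sample data are assumptions, not a decided schema):

```python
import sqlite3

# One table, h-* type as the first column; other columns are common
# microformats properties we would expect to populate.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        h_type     TEXT,   -- h-entry, h-review, h-card, h-cite, h-feed
        url        TEXT,
        p_name     TEXT,
        p_category TEXT,
        content    TEXT,
        published  TEXT
    )
""")
conn.execute("INSERT INTO items VALUES (?,?,?,?,?,?)",
             ("h-entry", "https://example.com/1", "Hello microformats",
              "indieweb", "First post", "2018-12-01"))
rows = conn.execute(
    "SELECT p_name FROM items WHERE h_type = 'h-entry'").fetchall()
print(rows)  # [('Hello microformats',)]
```

The one-table layout makes cross-type queries trivial; the table-per-h-* layout would instead let each table carry only the properties that type actually uses.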

Indexing Sites

A feed reader could then be used to index sites. Using the Post Type Discovery algorithm and existing microformats parsers, we can add columns for all the properties used in:

For large blogs with decades and gigs of posts, we will index the pages over time in the background. Adding sites any faster gets expensive quickly.
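The Post Type Discovery step could look something like this simplified sketch over already-parsed properties (the real algorithm in the spec has more cases and normalization):

```python
def discover_post_type(props):
    """Simplified Post Type Discovery over parsed mf2 properties.
    props maps property names to lists of values."""
    for prop, kind in [("rsvp", "rsvp"), ("in-reply-to", "reply"),
                       ("repost-of", "repost"), ("like-of", "like"),
                       ("video", "video"), ("photo", "photo")]:
        if props.get(prop):
            return kind
    # No explicit response property: article if there is a distinct
    # name, otherwise a plain note.
    name = (props.get("name") or [""])[0].strip()
    content = (props.get("content") or [""])[0].strip()
    if name and name not in content:
        return "article"
    return "note"

print(discover_post_type({"like-of": ["https://example.com/1"]}))  # like
print(discover_post_type({"content": ["just a thought"]}))          # note
```

Running this as posts are indexed gives the search engine a post-type column to filter on for free.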


Some queries, like those involving people, would get hard-coded into the search engine. You could ask:

  • Where is @x? Then the search engine would query the checkin posts for that person and tell you the last known location.
  • Who is @x? This will present the h-card of a person. If there is a p-note or p-summary present, then a tagline will appear in the results.
  • What is @x's Mastodon name? This queries the directory and finds the rel-me link.
  • What (movie, book, podcast) is most popular? This would query the frequency of the p-name in the h-cite of any watch, read, or listen post (or whatever is the correct answer; much of this is new). These queries could of course be date restricted.
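The "Where is @x?" query, for example, is just "most recent checkin by author" once the tables exist. A toy version over hypothetical data:

```python
from datetime import datetime

# Toy index of checkin posts (all data hypothetical).
CHECKINS = [
    {"author": "x", "location": "Berlin", "published": "2018-11-03T10:00:00"},
    {"author": "x", "location": "New Haven", "published": "2018-12-01T09:30:00"},
    {"author": "y", "location": "Portland", "published": "2018-12-02T08:00:00"},
]

def where_is(person):
    """Answer 'Where is @x?' with the most recent checkin location."""
    posts = [c for c in CHECKINS if c["author"] == person]
    if not posts:
        return None
    latest = max(posts, key=lambda c: datetime.fromisoformat(c["published"]))
    return latest["location"]

print(where_is("x"))  # New Haven
```

Each hard-coded question reduces to a similar small query against one or two of the tables sketched above.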

Keyword Search

The keyword search would look for exact matches in:

  • first p-name after the h-*
  • p-category or rel="tag"
  • content
  • h-cite

These could then be weighted in some form of ranking:

  • +100 if keyword in the p-name and also p-category
  • +50 if p-name
  • +25 if p-category
  • +10 for each exact match in the content
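One reading of those weights as a scoring function (whether the name and category bonuses stack is an open design choice; here they don't):

```python
def score(keyword, post):
    """Apply the weights above: p-name and p-category hits dominate,
    each exact content match adds a little."""
    kw = keyword.lower()
    in_name = kw in post.get("p-name", "").lower()
    in_category = any(kw == c.lower() for c in post.get("p-category", []))
    content_hits = post.get("content", "").lower().split().count(kw)
    total = 0
    if in_name and in_category:
        total += 100
    elif in_name:
        total += 50
    elif in_category:
        total += 25
    return total + 10 * content_hits

post = {"p-name": "Learning IndieWeb", "p-category": ["IndieWeb"],
        "content": "indieweb is fun"}
print(score("indieweb", post))  # 110
```

Tuning would come later; the point is that the ranking can stay this transparent, unlike the silo algorithms discussed above.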

Next Steps

We needed to scope out an MVP, which this blog post now completes. Next we will start testing the different microformats-to-JSON parsers to populate tables with dynamic columns and see which can be static columns.

We will start with my blog but need a few other volunteers. Find me in chat if interested.

Update: Ryan Barrett reminded me of , which already has data to muck around in and a prior example of some crawling technology.

We also need help from people with experience using the IndieWeb building blocks.

Big Questions?

Can we add a micropub client so if you are signed into the search engine you can reply and interact with the results?

Can we develop APIs so people could add the search engine natively to their blogs for both local and network searches?

Could a private search engine help protect vulnerable blogging communities by controlling not only who can use the search engine but also giving users full control over what data is parsed?

Overall I think an opt-in search engine, where you can add and subtract your data as easily as any other time you use IndieLogIn, will be great for the community. Combined with the building blocks the community has already created, such a search tool would be useful for other consumable feeds as well.