Blog

  • What does a cold war look like?

    Anthropic claims to have built a generative AI so powerful at finding security vulnerabilities in software that they spun up a special project announcing how good it is, how you won’t be able to have it, and how it is available to the following companies and foundations:

    • Amazon Web Services (Publicly traded American-domiciled corporation)
    • Anthropic (Privately owned American-domiciled corporation)
    • Apple (Publicly traded American-domiciled corporation)
    • Broadcom (Publicly traded American-domiciled corporation)
    • Cisco (Publicly traded American-domiciled corporation)
    • CrowdStrike (Publicly traded American-domiciled corporation)
    • Google (Publicly traded American-domiciled corporation)
    • JPMorganChase (Publicly traded American-domiciled corporation)
    • the Linux Foundation (American-domiciled charitable foundation)
    • Microsoft (Publicly traded American-domiciled corporation)
    • NVIDIA (Publicly traded American-domiciled corporation)
    • Palo Alto Networks (Publicly traded American-domiciled corporation)

    I’m looking at this list and it doesn’t scream “Atlanticism is alive and well.”

  • Financial Spite

    Today I learned that Curve is a UK-based fintech owned by the UK-listed Lloyds Banking Group. I’d known what they were offering for a while but never signed up, because a card intermediary ruins many of the consumer protections enjoyed by card payments in the UK. But Curve has payment wearables. This would be incredibly convenient.

    In February 2026, and for the previous several years, I have used Apple Pay for pretty much everything. With Apple getting 0.15% of the transaction, you’re looking at… maybe £75 for the life of my usage?

    In March, I used Apple Pay twice. Both times I had forgotten my wallet. And although one of those was four taps, they covered a return journey between Zone 2 and Zone 1 of Transport for Greater Manchester, so only one transaction.

    I can’t excise myself from Yankee card payment providers yet (bring on an end to the Mastercard/Visa/Amex dominance, please), but I’ve managed to make more of my money stay in the UK with this change.

    I hope to be able to do more of this. And a Curve ring might be the backup for “I forgot my wallet” I otherwise used Apple Pay for this March.

  • A Letter to my Councillors: Preventing Rumoured Changes to Tram Service in Salford

    tl;dr: Salford tram service is rumoured to be made worse to support the Trafford line. I wrote to my councillors to ask them to prevent this.

    Hello Councillors,

    I’m writing about a rumoured change to tram service into Salford that will be introduced as a temporary measure; however, the last temporary tram service change to Salford lasted over four years.

    I have heard that TfGM is planning to do the following to services into Salford:

    • Closing the Etihad to MediaCity line
    • Reducing the Ashton–Eccles service from five trams an hour (every 12 minutes) to four an hour (every 15 minutes)
    • Running all trams on the remaining Ashton–Eccles line as doubles

    This is to support a service change on the Trafford line to run to Oldham.

    I am aware that the current electric network within the city centre doesn’t permit additional tram capacity; however, these changes needlessly punish Salford. TfGM has frequently defended poor service to Salford with the argument that all services run at a 12-minute frequency, while ignoring that the vast majority of destinations across the network are served by more than one line, with the richest terminus stations served by several.

    Additionally, afternoon and rush-hour service from the stops at Salford Quays and Exchange Quay will get intolerably busy, as people are frequently passed up by two services every 12 minutes with a single tram; a 15-minute service with a double tram would make the situation even worse.

    I do not want to see this network change. The walk from where I live near Mariners Canal will have a substantially degraded tram service, and the walk to the nearest Trafford line station is a horrible zigzag diversion due to the Peel Buildings’ selfish fortifications.

    The current state of the tram network and lack of investment in reducing service bottlenecks in the centre zone should not be compensated for by an intentional deterioration of Salford’s already poor service provision.

    I would like to see Salford as a whole come out strongly against these changes to the tram network services and ideally block them.

    Warm regards,

    Andrew

  • Infinite Algorithm Consequences

    The more I think about the #ukPol push to ban under-16s from social media, the more I’m convinced the problem is “the algorithm”.

    I had access to porn at a very young age, but it wasn’t foisted upon me (barring the odd goatse link) in a continual-scroll situation. When you finished reading your friends’ Geocities updates or your LiveJournal reverse-chronological feed, you were done.

    Algorithmic recommendation is editorial, and if social media companies were made accountable for it, would they then stop?

    There is a distinct lack of talk about the accountability gap in #ukPol. Amazon sells advertisements to marketplace sellers hawking dangerous, faulty counterfeit goods with no consequences. Influencers boost their reach hawking damaging grey-market tanning nasal sprays, using TikTok’s built-in systems to make a small commission.

    In both cases, large companies shrug about the specific harms they cause: there was no human intervention on their part, so no accountability?

    Considering the bullshit spouted in many UK newspapers, #ukPol somehow regulates their editorial lines better than it does big American tech companies. It’s a gap that could quickly be fixed.

    And my theory is that they’d abandon the algorithm if the consequences were greater than the profit they gain from it.

    And maybe we’d go back to seeing our friends’ posts on Instagram, with a less addictive product. Once you’ve caught up, it has nothing left for you to do, so you can put your phone down.

    Originally posted as a thread on my Mastodon instance.

  • Intentionally Ringing in the New Year

    Many blog-post bits are spilled annually in retrospectives or plans for the coming year. Change this, lose that, etc. I’ve always found it hard to start a new habit in January, a bleak month in its own right, and I’m supposed to deny myself stuff that, until then, counted as comforting vices? No thanks; let’s break our habits in the shortest month of the year instead.

    Complete aside: I make my return to work in January go smoother by opening a panettone I purchased in December on the first day back. I call it Janettone, and it’s a habit I would like to share with the world. This year’s is a limoncello one, and it’s a lovely way to start a workday.

    This year, I’m looking to take something I’ve tried to apply for several months a bit further. I want to be more intentional about everything I do. Phone down more. Single-screening, not multi-screening. Being intentional about my actions doesn’t preclude me simply wanting to enjoy what I do.

    I want to be more present in my forties, and that’s what I intend to do.

  • POST Mortem: How Azure Application Gateway’s Missing 308 Killed Our Linked Data API

    In the Linked Data world, cool URIs don’t change. That means you’re coining URIs that should be resolvable, and you pick the easiest form. Most people in the linked data world use http:// when coining URIs, even though today’s internet lives on https://, with upgrades handled by the service’s web stack.

    The Water Quality service launching on the environment.data.gov.uk portal has a RESTful Hydra API, and it supports a combination of GET and POST methods to retrieve data. The most useful endpoints, living at /data, are POST: they can receive GeoJSON bounding boxes to query both geographic and observation data, though some uses don’t require a body.

    In our testing we discovered that Python clients break when navigating the pagination of our service, but JavaScript works. WTF?

    The HTTP Redirect Status Code Landscape

    The 300 series of HTTP status codes, defined in RFC 7231 with 308 added later in RFC 7538, helps clients navigate the internet automatically when resources move or protocols change, and they’re all quite useful.

    • 301 (Moved Permanently): The old guard – allows method changes
    • 302 (Found): Temporary and method-flexible
    • 303 (See Other): Forces GET (useful for POST-Redirect-GET pattern)
    • 307 (Temporary Redirect): Preserves method but temporary semantics
    • 308 (Permanent Redirect): The hero we need – permanent + method preservation

    The issue: it’s not a resource move (different URI), it’s a protocol upgrade (same resource, different scheme).

    Linked Data APIs need 308

    Our canonical URIs often use the http:// scheme as protocol-agnostic identifiers; however, transport security requires HTTPS. Content negotiation and RDF payloads reference http:// URIs, and we don’t select the protocol on the fly in our responses. Both the Link headers and the Hydra pagination links in our endpoint use the same URIs to help people navigate our pagination setup.

    So you can see where this is going? POST getting redirected to GET causes things to fall over when we erroneously get a 301 from Microsoft’s Application Gateway.

    The Azure Application Gateway Gap

    The currently available responses for an HTTP-to-HTTPS upgrade in Azure’s Application Gateway service are 301, 302, 303, and 307. It’s missing the semantically accurate and method-preserving 308. Not only that, we can’t target specific paths or entry points in the service. We are forced to choose between wrong semantics (i.e. temporary redirects) or broken clients (POST gets converted to GET).

    Real-World Impact: Client Behaviour Broken

    Let’s be honest, the problem here is that Python is full of pedants (see: Pydantic), and the authors of its requests library have correctly implemented RFC 7231’s redirect behaviour in the post() method. When a 301 redirect is encountered, requests converts the POST to a GET, which our /data endpoint doesn’t support, returning a 405 Method Not Allowed error.

    What should be a simple loop navigating the Link headers to collect a paginated dataset now requires custom redirect handling. What should be the simple contents of the while next_url: loop:

    # What breaks with 301:
    response = requests.post(next_url, headers=headers, data="")
    # requests converts POST → GET on 301 redirect
    # Server responds: 405 Method Not Allowed
    # Pagination fails immediately

    Becomes the more convoluted:

    # Manual redirect handling to preserve POST method:
    response = requests.post(
        next_url, 
        headers=headers, 
        auth=auth, 
        data="", 
        allow_redirects=False  # Disable automatic redirect
    )
    
    # Handle redirect manually to keep POST
    if response.status_code in (301, 302, 307, 308):
        next_url = response.headers['Location']
        continue  # Re-POST to new URL
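    Putting it together, the whole flow can be sketched as below. This is a minimal sketch, not our production client: the fetch(url) callable and the simplified Link-Next header are stand-ins for requests.post(..., allow_redirects=False) and for parsing the real Link headers and Hydra pagination links.

```python
# Hedged sketch: `fetch(url)` is a hypothetical callable returning
# (status, headers, body); in real code it would wrap
# requests.post(url, headers=headers, data="", allow_redirects=False).
def paginate_post(fetch, first_url, max_redirects=5):
    """Collect page bodies, manually re-POSTing on 3xx redirects."""
    pages = []
    next_url = first_url
    while next_url:
        url, redirects = next_url, 0
        while True:
            status, headers, body = fetch(url)
            if status in (301, 302, 307, 308):
                redirects += 1
                if redirects > max_redirects:
                    raise RuntimeError("redirect loop while re-POSTing")
                url = headers["Location"]  # re-POST to the new URL
                continue
            break
        pages.append(body)
        # Stand-in for parsing the Link header / Hydra view for the next page
        next_url = headers.get("Link-Next")
    return pages
```

    With a 308 in place none of this would be necessary: requests preserves the POST method across 307 and 308 automatically.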

    Now I have to build my own redirect handling in Python because Microsoft has the semantics of their response codes wrong. I’m fine with it, but I want people to be able to use our endpoint easily.

    Our front-end developers didn’t experience the same problem, which means JavaScript’s fetch doesn’t do the same thing. This gives us an inconsistent API experience, and even with documentation clearly explaining what’s going wrong with their code, I’m still going to get support tickets saying the thing is broken.

    Microsoft: Fix Your Shit

    Your Application Gateway redirect options aren’t complete. Give us a 308 code; allow us to be the pedants I want us to be. It would make a massive impact for the semantic web, improve our RESTful APIs, and let us follow modern HTTP patterns without breaking things for everyone else.

    Standards exist for a reason, and this isn’t a niche concern: as LLM and agentic AI usage becomes more and more common, modern ways of accessing knowledge graphs and FAIR data require getting the semantics right everywhere, including in our HTTP response codes.

    @Azure: gimme the response code 308.


    Note: I have a support request asking for this behaviour. I expect Microsoft to change nothing.

  • PostgreSQL is the best triplestore

    When folks want data, they want it on a subject basis, which makes providing linked data that much easier. I have been exploring using Postgres with FastAPI and pydantic to serialize JSON-LD direct from SQL, to give users a familiar JSON RESTful API with content-negotiated RDF baked in.
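    As a minimal sketch of the shape of this, here’s the idea with stdlib sqlite3 standing in for Postgres and a plain function standing in for the pydantic serializers: each row maps straight into a JSON-LD object, with no triplestore in the middle. The base URI is a made-up example.

```python
import json
import sqlite3

# Stand-in for Postgres: one row per concept, keyed by notation.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE sample_material (notation TEXT, label TEXT)")
conn.execute("INSERT INTO sample_material VALUES ('2AZZ', 'RIVER / RUNNING SURFACE WATER')")

def row_to_jsonld(row):
    """Serialize a SQL row straight to JSON-LD (hypothetical base URI)."""
    return {
        "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
        "@id": f"https://example.org/id/sample-material/{row['notation']}",
        "@type": "skos:Concept",
        "skos:prefLabel": row["label"],
        "skos:notation": row["notation"],
    }

doc = row_to_jsonld(conn.execute("SELECT * FROM sample_material").fetchone())
print(json.dumps(doc, indent=2))
```

    The real service negotiates content types on top of this, but the core move is the same: SQL rows out, JSON-LD objects in the response.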

    Compared to the existing Jena-based API, its throughput is two orders of magnitude faster, it is more reliable, and the data ingress doesn’t make me reconsider being in this domain.

    I hinted about this in February with a post about JSON-LD and prefixes. The API should be finished by the end of the month.

    (I’m on my second of hopefully two refactors.)

  • My own version of Rating of Perceived Effort

    Today a friend I’m coaching commented that a run of his “Felt good. Maybe a bit too fast for an easy run but still below tempo”. I had to remind him that an easy run is one where you feel good about it.

    Offhandedly, I invented a scale of how I want him to feel after various types of runs:

    • Easy: Felt good, want to bring these vibes with me
    • Tempo: Absolutely knackered, I will collapse after I stretch
    • Efforts: I am in agony, and I want other people to feel this pain
    • Long: I want to end it all so this will stop

    Yeah, it’s glib, but if you’re walking away from an effort session and your cardiovascular system isn’t crying out for revenge you’ve failed.

  • The Art of Semantic Procrastination: Why I Use Blank Nodes for Concepts That Aren’t Mine

    In the linked data world, there is always a temptation to boil the ocean. When building out a new API, or even just a new dataset, there are so many undefined and uncoined concepts (skos:Concept and otherwise) that provide human context, and you feel the pressure to define them in your RDF – at the risk of taking on too much and straying outside your authority. I faced that in the past while building out a linked data service at the Office for National Statistics, and having been burnt by the numerous kettles we had going to define everything semantically, I’ve been determined not to make that mistake again.

    The new API I’ve been developing for DEFRA is a Hydra/SOSA-vocabulary-based, RESTful, content-negotiated API for observational water quality data in England. The architecture of the service is FastAPI + PostGIS with a Next.js frontend: the API doesn’t know anything about RDF; however, it responds with JSON-LD by default, with the JSON written in a way that people not familiar with RDF would appreciate.

    The main payload of the API is sampling points (sosa:FeatureOfInterest), which have samples and samplings (sosa:Sample and sosa:Sampling), which in turn have observations (sosa:Observation). Each of these levels has domain-specific types, classifications, and annotations which are necessary for the interpretation and discovery of these data; however, no authoritative, public resource for these concepts currently exists.

    As someone who lives FAIR, linked data, but knows most consumers of data neither understand nor care about it, what should I do? The answer isn’t to avoid these concepts – it’s to represent them responsibly until someone with actual authority shows up.

    Procrastination by way of blank nodes

    My solution is deterministic blank nodes. Instead of coining URIs for concepts I don’t own, I generate consistent blank nodes that can be reconciled later when authoritative sources emerge. This keeps my API stable while avoiding coining URIs I may eventually regret. Let me explain.

    Previously I would have attempted to coin URIs for all my concepts, at either dataset or higher-level scope – for example, capturing the concept of running surface water from a river. In the source data for the API I have a table with a key and a label; the key acts as a notation.

    // You have no authority here, Jackie Weaver
    {
      "@id": "http://environment.data.gov.uk/id/sample-material/2AZZ",
      "@type": ["skos:Concept", "sosa:FeatureOfInterest"],
      "skos:prefLabel": "RIVER / RUNNING SURFACE WATER",
      "skos:notation": "2AZZ"
    }

    The issue is that I don’t currently have responsibility for the concept scheme for sample materials, and it’s also not online. I know all the values, and I have a copy of the scheme to make the service work, but it’s not within the scope of delivery for the water quality API. So instead of speaking with authority, I’ve shifted to getting it down in code first and serving it via the API. How about as a blank node?

    // Procrastinating via blank nodes
    {
      "@id": "_:sampleMaterial-2AZZ",
      "@type": ["skos:Concept", "sosa:FeatureOfInterest"],
      "skos:prefLabel": "RIVER / RUNNING SURFACE WATER",
      "skos:notation": "2AZZ"
    }

    The key here isn’t just using any blank node – it’s using a deterministic blank node identifier. By concatenating the concept scheme name with the notation (_:sampleMaterial-2AZZ), I ensure that every time this concept appears in my API responses, it gets the same blank node identifier.

    Note: This isn’t standard RDF blank node syntax – it’s my deterministic generation pattern from my source data. When serialized to actual RDF formats, these become proper blank nodes, but the consistent string ensures they all resolve to the same node across serializations. This isn’t just semantic pedantry – it has real practical benefits.

    When someone downloads multiple API responses and converts them to Turtle or N-Triples, all instances of _:sampleMaterial-2AZZ will be recognized as the same entity. Without this deterministic approach, you’d end up with multiple disconnected blank nodes for what should be the same concept, creating an unforgivable mess.
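    The generation itself is nothing clever. A minimal sketch (the helper name is mine, purely illustrative):

```python
def blank_node(scheme: str, notation: str) -> str:
    """Deterministic blank node identifier: same concept, same id, in every response."""
    return f"_:{scheme}-{notation}"

# The concept from earlier, built with the deterministic identifier.
concept = {
    "@id": blank_node("sampleMaterial", "2AZZ"),
    "@type": ["skos:Concept", "sosa:FeatureOfInterest"],
    "skos:prefLabel": "RIVER / RUNNING SURFACE WATER",
    "skos:notation": "2AZZ",
}
```

    Because the identifier is a pure function of the concept scheme name and the notation, every serializer in the service emits the same string without any coordination.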

    Here’s what this looks like in practice – a real API response converted to Turtle:

    curl -sSL --fail 'http://localhost:8000/sampling-point/53130070/sample?skip=0&limit=3&sampleMaterialType=2AZZ&complianceOnly=false' | rdfpipe -i json-ld -o ttl -
    @prefix dcterms: <http://purl.org/dc/terms/> .
    @prefix hydra: <http://www.w3.org/ns/hydra/core#> .
    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix sosa1: <http://www.w3.org/ns/sosa#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    
    <http://localhost:8000/sampling-point/53130070/sampling/1506412> a sosa1:Sampling ;
        dcterms:type _:samplingPurpose-CA ;
        sosa1:hasFeatureOfInterest <http://localhost:8000/sampling-point/53130070> ;
        sosa1:hasResult <http://localhost:8000/sampling-point/53130070/sample/1506412> ;
        sosa1:resultTime "2001-08-08"^^xsd:date ;
        sosa1:startTime "2000-08-18T12:20:00"^^xsd:dateTime .
    
    <http://localhost:8000/sampling-point/53130070/sampling/1510110> a sosa1:Sampling ;
        dcterms:type _:samplingPurpose-CA ;
        sosa1:hasFeatureOfInterest <http://localhost:8000/sampling-point/53130070> ;
        sosa1:hasResult <http://localhost:8000/sampling-point/53130070/sample/1510110> ;
        sosa1:resultTime "2000-10-05"^^xsd:date ;
        sosa1:startTime "2000-09-20T12:00:00"^^xsd:dateTime .
    
    <http://localhost:8000/sampling-point/53130070/sampling/2303318> a sosa1:Sampling ;
        dcterms:type _:samplingPurpose-CA ;
        sosa1:hasFeatureOfInterest <http://localhost:8000/sampling-point/53130070> ;
        sosa1:hasResult <http://localhost:8000/sampling-point/53130070/sample/2303318> ;
        sosa1:resultTime "2001-06-07"^^xsd:date ;
        sosa1:startTime "2000-11-29T00:01:00"^^xsd:dateTime .
    
    <http://localhost:8000/sampling-point/53130070/sample/1506412> a sosa1:Sample ;
        sosa1:isResultOf <http://localhost:8000/sampling-point/53130070/sampling/1506412> ;
        sosa1:isSampleOf _:sampleMaterial-2AZZ,
            <http://localhost:8000/sampling-point/53130070> .
    
    <http://localhost:8000/sampling-point/53130070/sample/1510110> a sosa1:Sample ;
        sosa1:isResultOf <http://localhost:8000/sampling-point/53130070/sampling/1510110> ;
        sosa1:isSampleOf _:sampleMaterial-2AZZ,
            <http://localhost:8000/sampling-point/53130070> .
    
    <http://localhost:8000/sampling-point/53130070/sample/2303318> a sosa1:Sample ;
        sosa1:isResultOf <http://localhost:8000/sampling-point/53130070/sampling/2303318> ;
        sosa1:isSampleOf _:sampleMaterial-2AZZ,
            <http://localhost:8000/sampling-point/53130070> .
    
    [] a hydra:Collection ;
        hydra:member <http://localhost:8000/sampling-point/53130070/sample/1506412>,
            <http://localhost:8000/sampling-point/53130070/sample/1510110>,
            <http://localhost:8000/sampling-point/53130070/sample/2303318> ;
        hydra:totalItems 129 ;
        hydra:view [ hydra:first <http://localhost:8000/sampling-point/53130070/sample?skip=0&limit=3&sampleMaterialType=2AZZ&complianceOnly=false> ;
                hydra:last <http://localhost:8000/sampling-point/53130070/sample?skip=126&limit=3&sampleMaterialType=2AZZ&complianceOnly=false> ;
                hydra:next <http://localhost:8000/sampling-point/53130070/sample?skip=3&limit=3&sampleMaterialType=2AZZ&complianceOnly=false> ] .
    
    _:sampleMaterial-2AZZ a skos:Concept,
            sosa1:FeatureOfInterest ;
        skos:notation "2AZZ" ;
        skos:prefLabel "RIVER / RUNNING SURFACE WATER" .
    
    _:samplingPurpose-CA a skos:Concept ;
        skos:notation "CA" ;
        skos:prefLabel "COMPLIANCE AUDIT (PERMIT)" .

    Notice how _:sampleMaterial-2AZZ appears once in the graph but is referenced by multiple samples – exactly what we want.

    When the kettles come out: reconciliation without regret

    The beauty of this approach is that when the authoritative concept scheme eventually goes online (and it will, because I’m also building that service), I can simply add reconciliation triples without breaking anything. This is where semantic versioning becomes your friend – adding triples is a patch-level change at most. It neither changes the shape of the API’s JSON, nor previously coined URIs.

    // Future state - same identifier, now with authority
    {
      "@id": "_:sampleMaterial-2AZZ",
      "@type": ["skos:Concept", "sosa:FeatureOfInterest"],
      "skos:prefLabel": "RIVER / RUNNING SURFACE WATER",
      "skos:notation": "2AZZ",
      "skos:exactMatch": "http://environment.data.gov.uk/def/sample-material/2AZZ",
      "rdfs:definedBy": "http://environment.data.gov.uk/def/sample-material/"
    }

    Now I can fire up those kettles I avoided earlier. The blank node stays the same, existing API consumers continue to work, but new consumers can follow the skos:exactMatch to the authoritative source. Cool URIs don’t change, and neither will these deterministic blank nodes.

    This approach scales beautifully across different concept schemes. Whether it’s determinands that eventually align with QUDT vocabularies, geographic regions that get proper Ordnance Survey URIs, or measurement units that find their way into authoritative registries – the pattern remains the same. Add the reconciliation triples when you have them, leave the blank nodes as stable anchors within the service.

    // And it even supports multiple reconciliation targets
    {
      "@id": "_:sampleMaterial-2AZZ",
      "@type": ["skos:Concept", "sosa:FeatureOfInterest"],
      "skos:prefLabel": "RIVER / RUNNING SURFACE WATER",
      "skos:notation": "2AZZ",
      "skos:exactMatch": "http://environment.data.gov.uk/def/sample-material/2AZZ",
      "rdfs:definedBy": "http://environment.data.gov.uk/def/sample-material/",
      "skos:closeMatch": "http://purl.obolibrary.org/obo/ENVO_00000022"
    }
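    Treating concepts as plain dicts before serialization, reconciliation really is just an additive merge. A minimal sketch (the reconcile helper is hypothetical, not part of the service):

```python
def reconcile(concept: dict, links: dict) -> dict:
    """Return a copy of the concept with reconciliation properties added,
    never overwriting anything existing consumers already depend on."""
    merged = dict(concept)
    for predicate, target in links.items():
        merged.setdefault(predicate, target)
    return merged

base = {"@id": "_:sampleMaterial-2AZZ", "skos:notation": "2AZZ"}
linked = reconcile(
    base,
    {"skos:exactMatch": "http://environment.data.gov.uk/def/sample-material/2AZZ"},
)
```

    Because setdefault never overwrites, the shape existing consumers rely on stays exactly as it was, and base itself is left untouched – a patch-level change, as promised.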

    In a perfect world, every concept would have an authoritative URI from day one. In the real world, sometimes the most responsible thing you can do is admit you’re not the authority – yet. Deterministic blank nodes let you build useful services today while keeping the door open for proper reconciliation tomorrow. It’s procrastination with a purpose.

  • Data Are Cool: Disseminating My Online Safety Act Compliance

    The Online Safety Act is a piece of work, and not a good one either. Ofcom is not an excellent or communicative regulator: because they are responsible for both setting the rules and enforcing them, they won’t provide advice that could prejudice future enforcement. That said, there is a quick test to see whether a given website is in scope of the Online Safety Act:

    1. Does the service have links with the United Kingdom?
    2. Is the service a user-to-user-service?
    3. Do you provide a search service?
    4. Does your online service publish or display pornographic content?
    5. Do any of the enumerated exemptions apply?

    I’m going to cover these in turn, but the tl;dr is that I don’t believe Data Are Cool (DAC) is in scope of the Online Safety Act.

    Does the service have links with the United Kingdom?

    There are two components to this test, first is whether UK users are a target market, and second is whether the service has a significant number of UK users. If you hit either one of them you’re in scope of the Online Safety Act.

    The UK as a target market test

    The first page of Ofcom’s Check if the regulations apply to your online service form has the following bullets, which help determine whether the UK is a target market:

    Your online service is likely to have links with the United Kingdom if it:

    • Is designed for UK users;
    • Is promoted or marketed toward UK users;
    • Generates revenue from UK users either:
      • directly (e.g. via subscriptions or sales); or
      • indirectly (e.g. through advertising to UK users, including people or organizations);
    • Includes functionalities or content that is tailored for UK users; or
    • Has a UK domain or provides a UK contact address and/or telephone contact number.

    I don’t believe DAC is designed for UK users; it is not promoted or marketed; it generates no revenue; none of its content is tailored to UK users; and it doesn’t have a UK domain. Its user base is my friends and family, and there are no sign-up capabilities.

    The significant number of UK users test

    The second component of this test is whether there are a significant number of UK users. Ofcom flat-out refuses to define, even by orders of magnitude, what a significant number of UK users is.

    Candidly, DAC has 10 user accounts. I would reckon a significant number of UK users is well in the thousands, if not hundreds of thousands.

    UK links conclusion?

    Ironically, as I read this, DAC doesn’t demonstrate “links with the United Kingdom”: it neither targets the UK as a market nor has a significant number of UK users. Using the Regulation Checker form, the answer immediately becomes “No, the Online Safety Act is not likely to apply to your online service.” This is a good start, but let’s check the remaining tabs anyway.

    Is the service a user-to-user-service?

    Ofcom defines a user-to-user service as “an online service that allows its users to interact with each other.”

    DAC does this. It’s a social network. It’s a user-to-user service. Moving on.

    Do you provide a search service?

    Ofcom defines a search service as “online service which is, or includes, a search engine. A search engine is a feature which enables users to search more than one website and/or database.”

    The nature of Mastodon and the Fediverse is that it is a search service: it’s a federated social network whose search spans more than one website, albeit one that requires people to log in to search the Fediverse.

    Does your online service publish or display pornographic content?

    DAC does not have any alts (pornography-focused accounts) on the service; however, some of DAC’s adult users have subscribed to the feeds of users elsewhere in the Fediverse who do post pornographic content. So we don’t publish it, but we do display it to logged-in users.

    Exemptions?

    None of the Online Safety Act’s carve-outs exempt DAC. Though there’s a small amount of snark from me here, because the Act exempts UK Parliament’s websites. The UK Parliament’s petition website would otherwise clearly meet the threshold of a user-to-user service with UK targeting and UK users in the millions. Sauce for the goose? Natch.

    Anywho…

    Conclusion

    Going back through these tests, I don’t believe DAC is in scope of the Online Safety Act, mainly because I don’t believe it meets the thresholds established as “links to the United Kingdom”. It feels weird to phrase it this way, but…

    1. DAC isn’t designed for UK users (it isn’t designed beyond being a Mastodon instance);
    2. DAC isn’t promoted or marketed to UK users (it’s not promoted or marketed at all);
    3. DAC doesn’t generate revenue from UK users (I fund it out of my own pocket);
    4. DAC doesn’t have content tailored to UK users (it’s a social network for my friends and family);
    5. DAC doesn’t have a UK domain (it’s a vanity domain outside the country TLDs); and
    6. DAC doesn’t have a significant number of UK users (it has 10 users and I suspect significant is in the order of hundreds of thousands).

    If Ofcom comes knocking, I’ll engage with them in good faith, especially since I still plan on doing the Extra-Illegal Content/Harms risk assessments they require; however, I won’t be killing my service because of the Online Safety Act.

    I’m not going to be complacent about this, but I’m not going to worry about it either.

    Ofcom, if you’re reading this and want to get in touch, you can find all the details to contact me on the Data Are Cool about page.