Understanding how algorithmic systems shape what users see, know, and do -- from recommendation feeds to search ranking to credit scoring to hiring software. Covers the mechanics of recommendation systems, algorithmic bias and its sources, personalization's effects on information diets, opacity and accountability, AI limitations (hallucination, confident wrongness), and the human-in-the-loop question. Use when a learner needs to think critically about why particular content reached them.
Algorithmic awareness is the discipline of noticing that the content reaching you was selected by a system optimizing for something, and asking what that something is. Most online experience is now mediated by recommendation algorithms: what you see on social media, what videos YouTube queues, what appears at the top of search results, which products Amazon pushes, which job postings surface, which loan offers arrive. The systems are not neutral; they are trained to produce specific outcomes, and those outcomes are not always aligned with yours. This skill draws from Safiya Noble's Algorithms of Oppression, Cathy O'Neil's Weapons of Math Destruction, and the algorithmic accountability research community.
Agent affinity: noble (algorithmic bias, power asymmetry), palfrey (institutional framing), rheingold (user-facing strategies)
Concept IDs: diglit-recommendation-systems, diglit-algorithmic-bias, diglit-ai-limitations, diglit-data-collection
The word "algorithm" has two meanings that get conflated.
Narrow technical meaning: A finite sequence of precise steps that produces an output from an input. Sorting a list is an algorithm. Computing a checksum is an algorithm.
Broader popular meaning: A proprietary, often machine-learned system that makes decisions about what users see or what happens to them. "The Facebook algorithm" or "the hiring algorithm." This is usually a pipeline of statistical models trained on historical data, optimized for a business objective.
This skill is about the second. The systems we call "algorithms" in everyday speech are not neutral calculators; they are trained-to-maximize machines whose training objectives are almost always different from what users would consciously choose.
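The narrow technical sense is easy to show concretely. Here is a minimal checksum in Python -- a finite sequence of precise steps, fully deterministic, with nothing learned and nothing optimized (the function name and modulus are illustrative choices, not a standard):

```python
def checksum(data: bytes) -> int:
    """Narrow-sense algorithm: a finite, precise procedure.

    Sums all byte values modulo 256. Same input, same output,
    every time -- no training data, no objective function.
    """
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

print(checksum(b"hello"))  # always the same value for the same input
```

The systems discussed below share nothing with this beyond the name: their behavior is shaped by data and an optimization target, not by a hand-written sequence of steps.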
At a high level, a recommendation system does this:
1. Collect signals about you: clicks, watch time, likes, searches, follows.
2. Generate a pool of candidate items you have not yet seen.
3. Score each candidate with a model trained to predict a target metric.
4. Rank by score, serve the top items, and log your response as new training data.
The critical question is step 3: what is the target metric?
Recommendation systems are trained to optimize a specific, measurable outcome. Common choices:
- Click-through rate: did you click?
- Watch time or dwell time: how long did you stay?
- Engagement: did you like, share, or comment?
- Retention: did you come back tomorrow?
These metrics are proxies for "value to the user" but they are imperfect proxies. Outrage drives clicks. Anxiety drives engagement. Polarizing content drives retention. A system trained to maximize engagement will surface engaging content whether or not it is true, healthy, or good for you.
This is not a conspiracy. No one at a platform company sits in a room deciding to amplify misinformation. The objective function does it automatically. The engineers would need to actively override the optimization to stop it.
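The mechanism above can be sketched in a few lines. In this toy example (the items and scores are invented for illustration), the ranker sees only one number per item -- predicted engagement -- so accuracy plays no role in what rises to the top:

```python
# Hypothetical candidate items. The ranking model sees only the
# predicted-engagement score; the "accurate" flag exists in the real
# world but is invisible to the objective function.
items = [
    {"title": "calm, accurate explainer",  "pred_engagement": 0.31, "accurate": True},
    {"title": "outrage-bait hot take",     "pred_engagement": 0.88, "accurate": False},
    {"title": "nuanced long-form report",  "pred_engagement": 0.22, "accurate": True},
    {"title": "alarming half-true rumor",  "pred_engagement": 0.74, "accurate": False},
]

# The "amplification" is just a sort on the target metric -- no one decided
# to promote misinformation; no one decided not to, either.
feed = sorted(items, key=lambda it: it["pred_engagement"], reverse=True)

for it in feed:
    print(f'{it["pred_engagement"]:.2f}  {it["title"]}')
```

Changing the outcome requires changing the objective (or adding overrides on top of it), which is exactly the active intervention described above.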
Personalization means different users see different things. The systemic consequence is that your information environment becomes progressively tuned to your past behavior.
Eli Pariser's 2011 concept: the personalized web produces an information environment shaped by your clicks, where dissenting or unfamiliar viewpoints are filtered out not by a human editor but by an algorithm trained to predict your preferences.
A community whose members primarily interact with each other, reinforcing shared beliefs and suppressing counter-evidence. Social platforms naturally produce these because homophily (the tendency to connect with similar others) is strong in human networks.
The empirical literature is mixed. Early filter-bubble claims were sometimes overstated -- most people still encounter diverse content, and social platforms can actually expose users to more diverse views than their offline networks would. But the effect is real for heavy users, and the asymmetry of amplification means extreme content spreads disproportionately.
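The narrowing dynamic can be simulated in miniature. This sketch (topics, numbers, and the update rule are all invented assumptions, deterministic so the outcome is reproducible) shows a system that only ever recommends its current best guess: the slightly-preferred topic gets every impression, and the other topic's estimate never gets a chance to update:

```python
# A user who likes both topics, with only a slight real preference.
true_interest = {"politics": 0.60, "science": 0.55}
estimate = {"politics": 0.50, "science": 0.50}   # the system's model of the user
shown_counts = {"politics": 0, "science": 0}

for _round in range(20):
    # Greedy personalization: show only the topic currently scored highest.
    topic = max(estimate, key=estimate.get)
    shown_counts[topic] += 1
    # Deterministic stand-in for learning from clicks: nudge the estimate
    # toward the true interest of whatever was shown. Topics that are
    # never shown never generate data, so their estimates never move.
    estimate[topic] += 0.1 * (true_interest[topic] - estimate[topic])

print(shown_counts)  # every impression goes to one topic
```

A tiny initial edge, amplified by exploit-only recommendation, produces a feed with no diversity at all -- which is why production systems deliberately mix in exploration, and why heavy reliance on one personalized feed narrows what you see.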
Bias in algorithmic systems is not a bug in the code. It is a feature of how the systems are built.
Training data bias. The system learns from historical data. If the history reflects bias, the system reproduces it. A resume screener trained on a company's historical hires learns who the company historically hired -- which may not be who they should have hired.
Proxy bias. The system uses variables that are correlated with protected attributes. ZIP code correlates with race in the U.S., so lending models that use ZIP code may produce discriminatory outcomes even when race is not an input.
Feedback loop bias. A system's predictions affect the world, which produces the next round of training data. Predictive policing models direct police to areas where crime was previously reported, leading to more reports in those areas, reinforcing the model's prediction.
Measurement bias. The target variable itself is biased. "Engagement" measures what users clicked; it does not measure what users found valuable. Optimizing for the former does not give you the latter.
Evaluation bias. Models are tested on datasets that do not represent the full user population. Facial recognition systems performed dramatically worse on darker-skinned faces for years because the test sets were overwhelmingly lighter-skinned.
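The feedback-loop mechanism in particular is worth seeing end to end. In this deliberately simplified sketch (two districts, invented numbers, identical true crime rates), patrols follow reports and reports follow patrols, so a one-report gap in the historical data hardens into a large, self-confirming disparity:

```python
# Two districts with IDENTICAL true crime. District A happens to start
# with one extra recorded report in the historical data.
true_crime = {"A": 1.0, "B": 1.0}
reports = {"A": 11, "B": 10}

for _year in range(5):
    top = max(reports, key=reports.get)            # the model's "hot spot"
    patrols = {d: (80 if d == top else 20) for d in reports}
    # Recorded crime reflects how hard you look, not only what happens:
    # next year's training data is proportional to this year's patrols.
    reports = {d: patrols[d] * true_crime[d] for d in reports}

print(reports)  # the 1-report gap has become a stable 4x disparity
```

The model's prediction looks validated by the data it helped generate -- the signature of feedback loop bias.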
In 2026, "algorithm" increasingly means "large language model" or "generative AI." Understanding what these systems can and cannot do is essential digital literacy.
Hallucination is the technical term for an LLM generating fluent, confident content that is factually wrong. It is not lying -- the model has no concept of lying -- but the output is not reliable and must be verified against external sources.
The correct relationship with generative AI is partnership, not delegation. The AI drafts; the human verifies. The AI suggests; the human decides. Delegating decisions entirely to AI systems produces exactly the failure modes this skill describes.
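As a structural pattern, "the AI drafts; the human decides" is just a gate between generation and use. This sketch is purely illustrative -- `ai_draft` is a stand-in, not a real model call -- but it makes the point that nothing generated should reach publication without passing a human verification step:

```python
from typing import Callable, Optional

def ai_draft(prompt: str) -> str:
    # Stand-in for a model call; real LLM output must be treated as unverified.
    return f"DRAFT: answer to {prompt!r}"

def human_in_the_loop(prompt: str, approve: Callable[[str], bool]) -> Optional[str]:
    """The AI drafts; the human verifies and decides. Nothing ships unreviewed."""
    draft = ai_draft(prompt)
    return draft if approve(draft) else None

# The approval callback is where verification against external sources
# happens; an unapproved draft simply never leaves the function.
published = human_in_the_loop("summarize the report", approve=lambda d: "DRAFT" in d)
rejected = human_in_the_loop("summarize the report", approve=lambda d: False)
```

The design point is that rejection is the default path: a draft that no human affirms is discarded, not delivered.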
Most algorithmic systems are opaque: you cannot see the code, you cannot inspect the training data, and you often cannot even know when a decision was made by a machine versus a person.
When you are rejected for a loan, a job, or an insurance claim, you have very limited means to understand why or to challenge it. "The algorithm decided" is not an answer in a democratic society.
Some jurisdictions now require algorithmic impact assessments, explanation rights, or human review of automated decisions. The EU AI Act (2024) classifies AI systems by risk and imposes obligations on high-risk uses. Enforcement is still developing.
Related skills: information-evaluation, data-privacy, computational-literacy
When an algorithmic system affects your experience, ask:
- What is this system optimizing for, and who chose that objective?
- What data about me is it acting on?
- What would I see if the results were not personalized to my past behavior?
- If the decision matters, can I get an explanation or a human review?