{"id":224996,"date":"2026-04-20T08:35:14","date_gmt":"2026-04-20T08:35:14","guid":{"rendered":"https:\/\/www.9senses.ai\/?p=224996"},"modified":"2026-04-21T04:57:44","modified_gmt":"2026-04-21T04:57:44","slug":"a-confident-confabulator","status":"publish","type":"post","link":"https:\/\/www.9senses.net\/de\/a-confident-confabulator\/","title":{"rendered":"A confident confabulator"},"content":{"rendered":"<div class=\"et_pb_section_0 et_pb_section et_section_regular et_flex_section preset--group--divi-section--divi-box-shadow--default preset--group--divi-section--divi-sizing--default preset--group--divi-section--divi-sizing--hsj9uxo--default\"><\/div>\n\n<div class=\"et_pb_section_1 et_pb_section et_section_regular et_flex_section preset--group--divi-section--divi-box-shadow--default preset--group--divi-section--divi-sizing--default preset--group--divi-section--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_row_0 et_pb_row et_pb_row_3-4_1-4 et_block_row et_block_row_3-4_1-4 preset--group--divi-row--divi-box-shadow--default preset--group--divi-row--divi-sizing--h1k452m--default\">\n<div class=\"et_pb_column_0 et_pb_column et_pb_column_3_4 et_block_column et_pb_css_mix_blend_mode_passthrough preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_0 et_pb_text et_pb_bg_layout_light et_pb_module et_block_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>You do not have sufficient permissions to access this content.<\/p>\n<\/div><\/div>\n\n<div class=\"et_pb_post_title_0 et_pb_post_title et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-post-title--divi-box-shadow--default preset--group--divi-post-title--divi-sizing--default 
preset--group--divi-post-title--divi-sizing--hsj9uxo--default preset--module--divi-post-title--y6glixiooo\"><div class=\"et_pb_title_container\"><h1 class=\"entry-title\">A confident confabulator<\/h1><\/div><\/div>\n\n<div class=\"et_pb_text_1 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>AI hallucinations aren't random \u2014 they cluster, systematically and predictably, in the topics you cannot independently verify.<\/p>\n<\/div><\/div>\n\n<div class=\"et_pb_text_2 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>Ask an AI model about the French Revolution, and you get a competent summary. Ask about a minor 18th-century poet, and you'll still get a competent-sounding summary \u2014 except this one might be partly invented, with fabricated quotes and a biography stitched together from similar figures. Both answers are delivered with the same polished authority of an expert in the field. One is real, one is mostly fiction.<\/p>\n<p>This isn't a random failure; AI hallucinations are not evenly distributed across topics. They cluster, systematically and predictably, in exactly those areas where users know less and are thus least able to detect them. This makes them one of the most serious problems in AI.<\/p>\n<p><span>In short: the more thoroughly a subject has been written about and made available to the AI, the more reliably a model can talk about it. The further a query drifts into specialist territory, the more the output shifts from recall to invention. This is not a calibration problem that better training will solve \u2014 it's intrinsic to how these systems work. 
<\/span><span>The pattern shows up consistently wherever it has been measured, across fields as different as medicine, law, and software development.<\/span><\/p>\n<\/div><\/div>\n<\/div>\n\n<div class=\"et_pb_column_1 et_pb_column et_pb_column_1_4 et-last-child et_block_column et_pb_css_mix_blend_mode_passthrough preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_heading_0 et_pb_heading et_pb_module et_flex_module preset--group--divi-heading--divi-box-shadow--default preset--group--divi-heading--divi-sizing--default\"><div class=\"et_pb_heading_container\"><h3 class=\"et_pb_module_header\">Topic<\/h3><\/div><\/div>\n\n<div class=\"et_pb_text_3 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>You do not have sufficient permissions to access this content.<\/p>\n<\/div><\/div>\n\n<div class=\"et_pb_heading_1 et_pb_heading et_pb_module et_flex_module preset--group--divi-heading--divi-box-shadow--default preset--group--divi-heading--divi-sizing--default\"><div class=\"et_pb_heading_container\"><h3 class=\"et_pb_module_header\">Summary<\/h3><\/div><\/div>\n\n<div class=\"et_pb_text_4 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>You do not have sufficient permissions to access this content.<\/p>\n<\/div><\/div>\n\n<div class=\"et_pb_text_5 et_pb_text et_pb_bg_layout_light et-interaction-target-6959wczed9 et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\" 
data-interaction-trigger=\"spzv7w7k2t\" data-interaction-target=\"6959wczed9\"><div class=\"et_pb_text_inner\"><p>You do not have sufficient permissions to access this content.<\/p>\n<\/div><\/div>\n<\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_section_2 et_pb_section et_section_regular et_flex_section preset--group--divi-section--divi-box-shadow--default preset--group--divi-section--divi-sizing--default preset--group--divi-section--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_row_1 et_pb_row et_block_row preset--group--divi-row--divi-box-shadow--default preset--group--divi-row--divi-sizing--h1k452m--default\">\n<div class=\"et_pb_column_2 et_pb_column et_pb_column_4_4 et-last-child et_block_column et_pb_css_mix_blend_mode_passthrough preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_6 et_pb_text et_pb_bg_layout_light et_pb_module et_block_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><h3 style=\"text-align: right;\">\"The failures will be in cases where it's harder for a reader to notice \u2014 because they are more obscure.\"<\/h3>\n<\/div><\/div>\n\n<div class=\"et_pb_text_7 et_pb_text et_pb_bg_layout_light et_pb_module et_block_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--dc7ce674-7a5e-4044-aef0-0aa8dfb88bdb\"><div class=\"et_pb_text_inner\"><p style=\"text-align: right;\"><span style=\"font-family: Open Sans;\">Emily M. 
Bender, University of Washington<\/span><\/p>\n<\/div><\/div>\n\n<div class=\"et_pb_text_8 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p>The table below collects measured hallucination rates across a range of domains and tasks. The results show that hallucination rates fall as the amount of available source material grows: the better documented a domain, the less AI models confabulate.<\/p>\n<table class=\"data\">\n<thead>\n<tr>\n<th>Domain \/ Task<\/th>\n<th>Hallucination Rate<\/th>\n<th>Source<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Short-document summarization (top models)<\/td>\n<td>0.7\u20131.5%<\/td>\n<td>Vectara HHEM Leaderboard<\/td>\n<\/tr>\n<tr>\n<td>Citations on well-studied medical topic (major depression)<\/td>\n<td>~6%<\/td>\n<td>Deakin University, 2025<\/td>\n<\/tr>\n<tr>\n<td>Citations on less-studied medical topic (body dysmorphic disorder)<\/td>\n<td>~29%<\/td>\n<td>Deakin University, 2025<\/td>\n<\/tr>\n<tr>\n<td>Real-world conversational benchmark<\/td>\n<td>31.4%<\/td>\n<td>AuthenHallu, arXiv:2510.10539, 2025<\/td>\n<\/tr>\n<tr>\n<td>Purpose-built legal AI tools (Lexis+, Westlaw)<\/td>\n<td>17\u201334%<\/td>\n<td>Stanford RegLab \/ HAI, 2024<\/td>\n<\/tr>\n<tr>\n<td>General LLMs on specific legal queries<\/td>\n<td>58\u201388%<\/td>\n<td>Stanford \"Hallucinating Law\", 2024<\/td>\n<\/tr>\n<tr>\n<td>Code generation referencing non-existent libraries<\/td>\n<td>up to 99%<\/td>\n<td>LLM Hallucination Statistics, 2026<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p class=\"table-sources\">Sources: Vectara Hallucination Leaderboard (github.com\/vectara\/hallucination-leaderboard); Deakin University, ChatGPT citation accuracy in mental health literature reviews, 2025; Stanford HAI, \"Hallucinating Law\" and \"Hallucination-Free?\", 2024; AuthenHallu benchmark, 
arXiv:2510.10539, October 2025.<\/p>\n<div class=\"breakout-row\">\n<div class=\"plain\"><\/div>\n<\/div>\n<\/div><\/div>\n\n<div class=\"et_pb_heading_2 et_pb_heading et_pb_module et_flex_module preset--group--divi-heading--divi-box-shadow--default preset--group--divi-heading--divi-sizing--default\"><div class=\"et_pb_heading_container\"><h1 class=\"et_pb_module_header\">The core problem: How AI stores and retrieves<\/h1><\/div><\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_row_2 et_pb_row et_flex_row preset--group--divi-row--divi-box-shadow--default preset--group--divi-row--divi-sizing--h1k452m--default\">\n<div class=\"et_pb_column_3 et_pb_column et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_8_24 et_flex_column_8_24_tablet et_flex_column_24_24_phone et_flex_column_8_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_9 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p style=\"text-align: justify;\">The mechanism behind this relationship is not mysterious \u2014 it's baked directly into how language models process language in the first place. There is no actual recognition of content, just a very refined vector comparison that finds the nearest available match. If a subject is densely represented in the training data, \"near\" almost always lands on something correct. If source material is limited, \"near\" can be too far away. Once we understand how a query actually moves through the system, the concentration of hallucinations in less-covered subjects is a logical consequence.<\/p>\n<p style=\"text-align: justify;\">This all applies to us humans too. 
Each of us has areas of expertise where we can confidently state facts and rarely make mistakes, and others where our knowledge is limited.<\/p>\n<\/div><\/div>\n<\/div>\n\n<div class=\"et_pb_column_4 et_pb_column et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_16_24 et_flex_column_16_24_tablet et_flex_column_24_24_phone et_flex_column_16_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_10 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><h4 style=\"text-align: justify;\">How Language Models see text<\/h4>\n<p style=\"text-align: justify;\">Text never enters or leaves a language model as text. It gets broken into tokens \u2014 subword fragments, each mapped to an integer ID \u2014 and each token is then converted into a high-dimensional vector, typically 1,000 to 10,000 numbers long. That vector is everything the model \"knows\". Training adjusts billions of parameters so that tokens appearing in similar contexts end up with mathematically similar vectors, and so the model can predict which vector is most likely to come next.<\/p>\n<p style=\"text-align: justify;\">At no point does the system check anything against truth. It checks proximity. The sentences \"penicillin was discovered by Alexander Fleming\" and \"penicillin was discovered by Alexander Flemming\" are nearly identical as vectors \u2014 the model outputs whichever pattern the training data made statistically more likely, not whichever is factually correct. 
Where training data is thin, the nearest statistical neighbor is often too far away to represent a fact, and the model presents the wrong answer dressed in the right shape, delivered with confidence.<\/p>\n<\/div><\/div>\n<\/div>\n\n<div class=\"et_pb_column_5 et_pb_column et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_16_24 et_flex_column_16_24_tablet et_flex_column_24_24_phone et_flex_column_16_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_11 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\">\n<div class=\"et_pb_text_12 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--group--divi-text--divi-font-body--h1yjkjr--7p5s44libg preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><p style=\"text-align: justify;\">The key difference is the confidence with which uncertainty is presented. For AI systems, confidence is baked into their generation logic and is not connected to the validity of the content they present. They sound authoritative and certain when making things up.<\/p>\n<p style=\"text-align: justify;\">This confidence mismatch compounds the problem. Humans use linguistic confidence as a heuristic for reliability \u2014 nervous, hedging speech signals uncertainty; fluent assertion signals knowledge. Language models break this cue. They speak with uniform fluency whether they're on solid ground or free-associating from fragments, and the signal humans rely on to gauge speaker reliability is, for practical purposes, absent from their output. 
Most LLM companies have also decided that users don't like a \"sorry, I can't confidently answer that question\" and instead let their systems present low-confidence answers as if they were certain.<\/p>\n<\/div><\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_column_6 et_pb_column et-last-child et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_8_24 et_flex_column_8_24_tablet et_flex_column_24_24_phone et_flex_column_8_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_image_0 et_pb_image et_pb_module et_flex_module preset--group--divi-image--divi-box-shadow--hr81m0w--default preset--group--divi-image--divi-sizing--hsj9uxo--default\"><span class=\"et_pb_image_wrap\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.9senses.net\/wp-content\/uploads\/2026\/04\/2026_Brett-Jordan_CrypticWriting-scaled.jpg\" alt=\"Image by Brett Jordan on Unsplash.com\" title=\"2026_Brett Jordan_CrypticWriting\" width=\"2560\" height=\"1920\" srcset=\"https:\/\/www.9senses.net\/wp-content\/uploads\/2026\/04\/2026_Brett-Jordan_CrypticWriting-scaled.jpg 2560w, https:\/\/www.9senses.net\/wp-content\/uploads\/2026\/04\/2026_Brett-Jordan_CrypticWriting-1280x960.jpg 1280w, https:\/\/www.9senses.net\/wp-content\/uploads\/2026\/04\/2026_Brett-Jordan_CrypticWriting-980x735.jpg 980w, https:\/\/www.9senses.net\/wp-content\/uploads\/2026\/04\/2026_Brett-Jordan_CrypticWriting-480x360.jpg 480w\" sizes=\"(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 2560px, 100vw\" class=\"wp-image-224948\" \/><\/span><\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_row_3 et_pb_row et_flex_row preset--group--divi-row--divi-box-shadow--default preset--group--divi-row--divi-sizing--h1k452m--default\">\n<div class=\"et_pb_column_7 et_pb_column et-last-child 
et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_24_24 et_flex_column_24_24_tablet et_flex_column_24_24_phone et_flex_column_24_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_heading_3 et_pb_heading et_pb_module et_flex_module preset--group--divi-heading--divi-box-shadow--default preset--group--divi-heading--divi-sizing--default\"><div class=\"et_pb_heading_container\"><h1 class=\"et_pb_module_header\">Summary: Be skeptical where you need AI most<\/h1><\/div><\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_row_4 et_pb_row et_flex_row preset--group--divi-row--divi-box-shadow--default preset--group--divi-row--divi-sizing--h1k452m--default\">\n<div class=\"et_pb_column_8 et_pb_column et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_8_24 et_flex_column_8_24_tablet et_flex_column_24_24_phone et_flex_column_8_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_13 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--group--divi-text--divi-font-body--h1yjkjr--7p5s44libg preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><div class=\"plain\">\n<p>Most users scale their trust in AI<span>\u00a0<\/span><em>with<\/em><span>\u00a0<\/span>their own confidence about a topic: I feel good about this answer, so it probably is good. The correct calibration is the opposite. The less familiar the territory, the more likely both that the model is extrapolating and that the user won't notice if it's wrong.<\/p>\n<p>Errors you can catch are on topics you can independently verify. 
The errors you can't catch are on topics you can't verify \u2014 the same topics where hallucinations cluster most densely. The two distributions don't just coincide. They reinforce each other.<\/p>\n<p>So when using AI, we have to invert our instinctive reaction to \"trust the expert\", the person who confidently presents facts about a topic we know little about. With AI, that confident delivery is constant and tells us nothing about whether the answer is correct.<\/p>\n<\/div>\n<div class=\"breakout-box\"><\/div>\n<\/div><\/div>\n<\/div>\n\n<div class=\"et_pb_column_9 et_pb_column et_flex_column et_pb_css_mix_blend_mode_passthrough et_flex_column_16_24 et_flex_column_16_24_tablet et_flex_column_24_24_phone et_flex_column_16_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\">\n<div class=\"et_pb_text_14 et_pb_text et_pb_bg_layout_light et_pb_module et_flex_module preset--group--divi-text--divi-box-shadow--default preset--group--divi-text--divi-sizing--default preset--module--divi-text--default\"><div class=\"et_pb_text_inner\"><h4 style=\"text-align: justify;\">How to use AI responsibly<\/h4>\n<p style=\"text-align: justify;\">There are two takeaways from the fact that hallucinations increase in less-documented areas; they differ by use case.<\/p>\n<p style=\"text-align: justify;\">For developers of retrieval systems that solve specific problems, a key rule must be to err on the side of caution when stating answers. When vector data suggests low similarity, drop the answer or state explicitly that confidence in its correctness is low.<\/p>\n<p style=\"text-align: justify;\">For most of us, who just use Large Language Models in their publicly available form, the task is to retrain our brains in how we treat the answers those models provide. 
And for companies whose employees use AI, be it in an officially approved way or just quietly, training is the essential answer to the problem.<\/p>\n<\/div><\/div>\n<\/div>\n\n<div class=\"et_pb_column_10 et_pb_column et_flex_column et_pb_column_empty et_pb_css_mix_blend_mode_passthrough et_flex_column_16_24 et_flex_column_16_24_tablet et_flex_column_24_24_phone et_flex_column_16_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\"><\/div>\n\n<div class=\"et_pb_column_11 et_pb_column et-last-child et_flex_column et_pb_column_empty et_pb_css_mix_blend_mode_passthrough et_flex_column_8_24 et_flex_column_8_24_tablet et_flex_column_24_24_phone et_flex_column_8_24_tabletWide preset--group--divi-column--divi-box-shadow--default preset--group--divi-column--divi-sizing--default preset--group--divi-column--divi-sizing--hsj9uxo--default\"><\/div>\n<\/div>\n<\/div>\n\n<div class=\"et_pb_section_3 et_pb_section et_section_regular et_flex_section preset--group--divi-section--divi-box-shadow--default preset--group--divi-section--divi-sizing--default preset--group--divi-section--divi-sizing--hsj9uxo--default\"><\/div>","protected":false},"excerpt":{"rendered":"<p>AI hallucinations aren&#8217;t random \u2014 they cluster, systematically and predictably, in the topics you cannot independently 
verify.<\/p>","protected":false},"author":1,"featured_media":225005,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[47,1],"tags":[],"class_list":["post-224996","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","category-uncategorized"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/posts\/224996","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/comments?post=224996"}],"version-history":[{"count":24,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/posts\/224996\/revisions"}],"predecessor-version":[{"id":225023,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/posts\/224996\/revisions\/225023"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/media\/225005"}],"wp:attachment":[{"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/media?parent=224996"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/categories?post=224996"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.9senses.net\/de\/wp-json\/wp\/v2\/tags?post=224996"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}