{"id":889,"date":"2026-01-19T06:30:16","date_gmt":"2026-01-18T21:30:16","guid":{"rendered":"https:\/\/itexplore.org\/jp\/columns\/ai-research-advances-hierarchical-modeling-continual-learning-biological-networks\/"},"modified":"2026-01-19T06:30:16","modified_gmt":"2026-01-18T21:30:16","slug":"ai-research-advances-hierarchical-modeling-continual-learning-biological-networks","status":"publish","type":"post","link":"https:\/\/itexplore.org\/jp\/columns\/ai-research-advances-hierarchical-modeling-continual-learning-biological-networks\/","title":{"rendered":"AI\u7814\u7a76\u306e\u6700\u65b0\u52d5\u5411\uff1a\u968e\u5c64\u7684\u30e2\u30c7\u30ea\u30f3\u30b0\u3001\u7d99\u7d9a\u5b66\u7fd2\u3001\u751f\u7269\u5b66\u7684\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af"},"content":{"rendered":"<p>\u672c\u65e5\u306e\u6ce8\u76eeAI\u30fb\u30c6\u30c3\u30af\u30cb\u30e5\u30fc\u30b9\u3092\u3001\u5c02\u9580\u7684\u306a\u5206\u6790\u3068\u5171\u306b\u304a\u5c4a\u3051\u3057\u307e\u3059\u3002<\/p>\n<div class=\"wp-block-vk-blocks-alert vk_alert alert alert-warning has-alert-icon\">\n<div class=\"vk_alert_icon\">\n<div class=\"vk_alert_icon_icon\"><i class=\"fa-solid fa-triangle-exclamation\" aria-hidden=\"true\"><\/i><\/div>\n<div class=\"vk_alert_icon_text\"><span>Warning<\/span><\/div>\n<\/div>\n<div class=\"vk_alert_content\">\n<p>\u3053\u306e\u8a18\u4e8b\u306fAI\u306b\u3088\u3063\u3066\u81ea\u52d5\u751f\u6210\u30fb\u5206\u6790\u3055\u308c\u305f\u3082\u306e\u3067\u3059\u3002AI\u306e\u6027\u8cea\u4e0a\u3001\u4e8b\u5b9f\u8aa4\u8a8d\u304c\u542b\u307e\u308c\u308b\u53ef\u80fd\u6027\u304c\u3042\u308b\u305f\u3081\u3001\u91cd\u8981\u306a\u5224\u65ad\u3092\u4e0b\u3059\u969b\u306f\u5fc5\u305a\u30ea\u30f3\u30af\u5148\u306e\u4e00\u6b21\u30bd\u30fc\u30b9\u3092\u3054\u78ba\u8a8d\u304f\u3060\u3055\u3044\u3002<\/p>\n<\/div>\n<\/div>\n<div class=\"wp-block-group\" style=\"margin-top:40px;margin-bottom:40px\">\n<h2 class=\"wp-block-heading\">PHOTON: 
Lightspeed\u304b\u3064\u30e1\u30e2\u30ea\u52b9\u7387\u306e\u826f\u3044\u8a00\u8a9e\u751f\u6210\u306e\u305f\u3081\u306e\u968e\u5c64\u7684\u81ea\u5df1\u56de\u5e30\u30e2\u30c7\u30ea\u30f3\u30b0<\/h2>\n<ul>\n<li><strong>\u539f\u984c:<\/strong> PHOTON: Hierarchical Autoregressive Modeling for Lightspeed and Memory-Efficient Language Generation<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\">\u5c02\u9580\u30a2\u30ca\u30ea\u30b9\u30c8\u306e\u5206\u6790<\/h3>\n<div class=\"ai-summary-content\">\n<p><strong>PHOTON<\/strong>\u306f\u3001<strong>Transformer<\/strong>\u30e2\u30c7\u30eb\u306e\u6c34\u5e73\u7684\u306a\u30c8\u30fc\u30af\u30f3\u3054\u3068\u306e\u51e6\u7406\u306b\u4ee3\u308f\u308b\u3001\u5782\u76f4\u7684\u304b\u3064\u30de\u30eb\u30c1\u89e3\u50cf\u5ea6\u306e\u30b3\u30f3\u30c6\u30ad\u30b9\u30c8\u30b9\u30ad\u30e3\u30f3\u3092\u63a1\u7528\u3057\u305f\u968e\u5c64\u7684\u81ea\u5df1\u56de\u5e30\u30e2\u30c7\u30eb\u3067\u3059\u3002<\/p>\n<p>\u3053\u306e\u30e2\u30c7\u30eb\u306f\u3001\u30c8\u30fc\u30af\u30f3\u3092\u4f4e\u30ec\u30fc\u30c8\u306e\u30b3\u30f3\u30c6\u30ad\u30b9\u30c8\u72b6\u614b\u306b\u5727\u7e2e\u3059\u308b\u30dc\u30c8\u30e0\u30a2\u30c3\u30d7\u30a8\u30f3\u30b3\u30fc\u30c0\u30fc\u3068\u3001\u30c8\u30fc\u30af\u30f3\u8868\u73fe\u3092\u4e26\u5217\u306b\u518d\u69cb\u7bc9\u3059\u308b\u8efd\u91cf\u306a\u30c8\u30c3\u30d7\u30c0\u30a6\u30f3\u30c7\u30b3\u30fc\u30c0\u30fc\u304b\u3089\u306a\u308b\u968e\u5c64\u69cb\u9020\u3092\u6301\u3061\u307e\u3059\u3002\u3055\u3089\u306b\u3001\u518d\u5e30\u7684\u751f\u6210\u306b\u3088\u308a\u3001\u6700\u3082\u7c97\u3044\u30ec\u30d9\u30eb\u306e\u6f5c\u5728\u30b9\u30c8\u30ea\u30fc\u30e0\u306e\u307f\u3092\u66f4\u65b0\u3057\u3001\u30dc\u30c8\u30e0\u30a2\u30c3\u30d7\u306e\u518d\u30a8\u30f3\u30b3\u30fc\u30c9\u3092\u6392\u9664\u3057\u307e\u3059\u3002<\/p>\n<p>\u5b9f\u9a13\u306e\u7d50\u679c\u3001<strong>PHOTON<\/strong>\u306f\u3001\u7279\u306b\u9577\u6587\u30b3\u30f3\u30c6\u30ad\u30b9\u30c8\u3084\u30de\u30eb\u30c1\u30af\u30a8\u30ea\u30bf\u30b9\u30af\u306
b\u304a\u3044\u3066\u3001\u30b9\u30eb\u30fc\u30d7\u30c3\u30c8\u3068\u54c1\u8cea\u306e\u30c8\u30ec\u30fc\u30c9\u30aa\u30d5\u3067\u7af6\u5408\u3059\u308b<strong>Transformer<\/strong>\u30d9\u30fc\u30b9\u306e\u8a00\u8a9e\u30e2\u30c7\u30eb\u3088\u308a\u3082\u512a\u308c\u3066\u304a\u308a\u3001\u30c7\u30b3\u30fc\u30c9\u6642\u306eKV\u30ad\u30e3\u30c3\u30b7\u30e5\u30c8\u30e9\u30d5\u30a3\u30c3\u30af\u3092\u524a\u6e1b\u3057\u3001\u30e1\u30e2\u30ea\u3042\u305f\u308a\u306e\u30b9\u30eb\u30fc\u30d7\u30c3\u30c8\u3092\u6700\u59271000\u500d\u5411\u4e0a\u3055\u305b\u307e\u3059\u3002<\/p>\n<\/div>\n<p>\ud83d\udc49 <strong><a href=\"https:\/\/arxiv.org\/abs\/2512.20687\" target=\"_blank\" rel=\"noopener\">arXiv \u3067\u8a18\u4e8b\u5168\u6587\u3092\u8aad\u3080<\/a><\/strong><\/p>\n<ul>\n<li><strong>\u8981\u70b9:<\/strong> PHOTON offers a memory-efficient and high-throughput alternative to Transformers for language generation by employing a hierarchical autoregressive approach.<\/li>\n<li><strong>\u8457\u8005:<\/strong> Yuma Ichikawa, Naoya Takagi, Takumi Nakagawa, Yuzi Kanazawa, Akira Sakai<\/li>\n<\/ul>\n<blockquote class=\"wp-block-quote\"><p><span>English Summary:<\/span><\/p>\n<p><strong>PHOTON<\/strong> (Parallel Hierarchical Operation for TOp-down Networks) is a hierarchical autoregressive model that replaces the horizontal token-by-token scanning of <strong>Transformers<\/strong> with vertical, multi-resolution context scanning.<\/p>\n<p>The model features a hierarchy of latent streams: a bottom-up encoder compresses tokens into low-rate contextual states, while lightweight top-down decoders reconstruct fine-grained token representations in parallel. 
It also introduces recursive generation, which updates only the coarsest latent stream and eliminates bottom-up re-encoding.<\/p>\n<p>Experimental results show that <strong>PHOTON<\/strong> outperforms competitive <strong>Transformer<\/strong>-based language models in the throughput-quality trade-off, particularly in long-context and multi-query tasks. It reduces decode-time KV-cache traffic, yielding up to 1000x higher throughput per unit memory.<\/p>\n<\/blockquote>\n<\/div>\n<div class=\"wp-block-group\" style=\"margin-top:40px;margin-bottom:40px\">\n<h2 class=\"wp-block-heading\">\u8868\u73fe\u30c9\u30ea\u30d5\u30c8\u3092\u4f34\u3046\u7d99\u7d9a\u5b66\u7fd2<\/h2>\n<ul>\n<li><strong>\u539f\u984c:<\/strong> Learning continually with representational drift<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\">\u5c02\u9580\u30a2\u30ca\u30ea\u30b9\u30c8\u306e\u5206\u6790<\/h3>\n<div class=\"ai-summary-content\">\n<p>\u6df1\u5c64\u4eba\u5de5\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306f\u3001\u975e\u5b9a\u5e38\u306a\u30c7\u30fc\u30bf\u30b9\u30c8\u30ea\u30fc\u30e0\u304b\u3089\u306e\u5b66\u7fd2\u306b\u82e6\u52b4\u3057\u3066\u304a\u308a\u3001\u7d99\u7d9a\u5b66\u7fd2\u3067\u306f\u904e\u53bb\u306e\u30bf\u30b9\u30af\u306e\u5fd8\u5374\u3084\u53ef\u5851\u6027\u306e\u55aa\u5931\u304c\u8ab2\u984c\u3068\u306a\u308a\u307e\u3059\u3002<\/p>\n<p>\u73fe\u5728\u306e\u7d99\u7d9a\u5b66\u7fd2\u30a2\u30d7\u30ed\u30fc\u30c1\u306f\u3001\u904e\u53bb\u306e\u30bf\u30b9\u30af\u306e\u8868\u73fe\u306e\u5b89\u5b9a\u6027\u3092\u9ad8\u3081\u308b\u304b\u3001\u5c06\u6765\u306e\u5b66\u7fd2\u306e\u305f\u3081\u306e\u53ef\u5851\u6027\u3092\u4fc3\u9032\u3059\u308b\u3053\u3068\u306b\u7126\u70b9\u3092\u5f53\u3066\u3066\u304d\u307e\u3057\u305f\u3002\u3057\u304b\u3057\u3001\u52d5\u7269\u306e\u8133\u3067\u306f\u3001\u5b89\u5b9a\u3057\u305f\u884c\u52d5\u306b\u95a2\u9023\u3059\u308b\u5fdc\u7b54\u304c\u6642\u9593\u3068\u3068\u3082\u306b\u5909\u5316\u3059\u308b\u8868\u73fe\u30c9\u30ea\u30d5\u30c8\
u304c\u898b\u3089\u308c\u3001\u3053\u308c\u306f\u751f\u7269\u5b66\u7684\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u304c\u7d99\u7d9a\u7684\u306b\u5b66\u7fd2\u3059\u308b\u4e0a\u3067\u306e\u91cd\u8981\u306a\u7279\u6027\u3067\u3042\u308b\u53ef\u80fd\u6027\u304c\u793a\u5506\u3055\u308c\u3066\u3044\u307e\u3059\u3002<\/p>\n<p>\u672c\u7814\u7a76\u3067\u306f\u3001\u8868\u73fe\u30c9\u30ea\u30d5\u30c8\u3092\u7d99\u7d9a\u5b66\u7fd2\u3068\u7d50\u3073\u3064\u3051\u308b\u3053\u3068\u3067\u3001\u4eba\u5de5\u30b7\u30b9\u30c6\u30e0\u306b\u60c5\u5831\u3092\u63d0\u4f9b\u3067\u304d\u308b\u53ef\u80fd\u6027\u3092\u63a2\u6c42\u3057\u307e\u3059\u3002\u30c9\u30ea\u30d5\u30c8\u306f\u3001\u6052\u5e38\u6027\u30bf\u30fc\u30f3\u30aa\u30fc\u30d0\u30fc\u3068\u5b66\u7fd2\u95a2\u9023\u306e\u30b7\u30ca\u30d7\u30b9\u53ef\u5851\u6027\u306e\u6df7\u5408\u3092\u53cd\u6620\u3057\u3066\u3044\u308b\u53ef\u80fd\u6027\u304c\u3042\u308a\u3001\u4eba\u5de5\u30b7\u30b9\u30c6\u30e0\u306b\u304a\u3051\u308b\u7d99\u7d9a\u5b66\u7fd2\u306e\u30a2\u30d7\u30ed\u30fc\u30c1\u3092\u6539\u5584\u3059\u308b\u624b\u304c\u304b\u308a\u3068\u306a\u308b\u304b\u3082\u3057\u308c\u307e\u305b\u3093\u3002<\/p>\n<\/div>\n<p>\ud83d\udc49 <strong><a href=\"https:\/\/arxiv.org\/abs\/2512.22045\" target=\"_blank\" rel=\"noopener\">arXiv \u3067\u8a18\u4e8b\u5168\u6587\u3092\u8aad\u3080<\/a><\/strong><\/p>\n<ul>\n<li><strong>\u8981\u70b9:<\/strong> Representational drift, observed in biological systems, could offer new perspectives for developing more effective continual learning strategies in artificial neural networks.<\/li>\n<li><strong>\u8457\u8005:<\/strong> Suzanne van der Veldt, Gido M. 
van de Ven, Sanne Moorman, Guillaume Etter<\/li>\n<\/ul>\n<blockquote class=\"wp-block-quote\"><p><span>English Summary:<\/span><\/p>\n<p>Deep artificial neural networks struggle with learning from non-stationary data streams, leading to forgetting and loss of plasticity in continual learning scenarios.<\/p>\n<p>Current continual learning approaches have focused on either stabilizing representations of past tasks or promoting plasticity for future learning. However, biological neural networks exhibit representational drift, where responses associated with stable behaviors gradually change over time, suggesting this might be a key property for continual learning in biological systems.<\/p>\n<p>This research explores how linking representational drift to continual learning could inform artificial systems. Drift may reflect a mixture of homeostatic turnover and learning-related synaptic plasticity, potentially offering insights for improving continual learning approaches in artificial systems.<\/p>\n<\/blockquote>\n<\/div>\n<div class=\"wp-block-group\" style=\"margin-top:40px;margin-bottom:40px\">\n<h2 class=\"wp-block-heading\">\u8aa4\u5dee\u9006\u4f1d\u64ad\u3092\u7528\u3044\u305a\u306b\u968e\u5c64\u7684\u7279\u5fb4\u3092\u5b66\u7fd2\u3059\u308b\u751f\u7269\u5b66\u7684\u30a4\u30f3\u30b9\u30d4\u30ec\u30fc\u30b7\u30e7\u30f3\u3092\u53d7\u3051\u305f\u6574\u6d41\u30b9\u30da\u30af\u30c8\u30eb\u30e6\u30cb\u30c3\u30c8\uff08ReSU\uff09\u306e\u30cd\u30c3\u30c8\u30ef\u30fc\u30af<\/h2>\n<ul>\n<li><strong>\u539f\u984c:<\/strong> A Network of Biologically Inspired Rectified Spectral Units (ReSUs) Learns Hierarchical Features Without Error Backpropagation<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\">\u5c02\u9580\u30a2\u30ca\u30ea\u30b9\u30c8\u306e\u5206\u6790<\/h3>\n<div 
class=\"ai-summary-content\">\n<p>\u672c\u7814\u7a76\u3067\u306f\u3001\u6574\u6d41\u30b9\u30da\u30af\u30c8\u30eb\u30e6\u30cb\u30c3\u30c8\uff08<strong>ReSU<\/strong>\uff09\u304b\u3089\u306a\u308b\u751f\u7269\u5b66\u7684\u306b\u7740\u60f3\u3092\u5f97\u305f\u591a\u5c64\u30cb\u30e5\u30fc\u30e9\u30eb\u30a2\u30fc\u30ad\u30c6\u30af\u30c1\u30e3\u3092\u5c0e\u5165\u3057\u3066\u3044\u307e\u3059\u3002\u5404<strong>ReSU<\/strong>\u306f\u3001\u904e\u53bb\u3068\u672a\u6765\u306e\u5165\u529b\u30da\u30a2\u306e\u6b63\u6e96\u76f8\u95a2\u5206\u6790\uff08CCA\uff09\u306b\u3088\u3063\u3066\u5f97\u3089\u308c\u305f\u6b63\u6e96\u65b9\u5411\u306b\u3001\u5165\u529b\u5c65\u6b74\u306e\u6700\u8fd1\u306e\u30a6\u30a3\u30f3\u30c9\u30a6\u3092\u5c04\u5f71\u3057\u3001\u305d\u306e\u6b63\u307e\u305f\u306f\u8ca0\u306e\u6210\u5206\u3092\u6574\u6d41\u3057\u307e\u3059\u3002<\/p>\n<p>\u30b7\u30ca\u30d7\u30b9\u7d50\u5408\u3068\u6642\u9593\u30d5\u30a3\u30eb\u30bf\u30fc\u306b\u6b63\u6e96\u65b9\u5411\u3092\u30a8\u30f3\u30b3\u30fc\u30c9\u3059\u308b\u3053\u3068\u3067\u3001<strong>ReSU<\/strong>\u306f\u5c40\u6240\u7684\u304b\u3064\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u5b9f\u88c5\u3057\u3001\u3088\u308a\u8907\u96d1\u306a\u7279\u5fb4\u3092\u6bb5\u968e\u7684\u306b\u69cb\u7bc9\u3057\u307e\u3059\u3002\u3053\u306e\u30a2\u30d7\u30ed\u30fc\u30c1\u306f\u3001\u8aa4\u5dee\u9006\u4f1d\u64ad\u306b\u4f9d\u5b58\u3057\u306a\u3044\u3001\u751f\u7269\u5b66\u7684\u306b\u6839\u62e0\u306e\u3042\u308b\u6df1\u5c64\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306e\u69cb\u7bc9\u30d1\u30e9\u30c0\u30a4\u30e0\u3092\u63d0\u4f9b\u3057\u307e\u3059\u3002<\/p>\n<p>2\u5c64\u306e<strong>ReSU<\/strong>\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3092\u81ea\u7136\u30b7\u30fc\u30f3\u306e\u4e26\u9032\u30bf\u30b9\u30af\u3067\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u5b66\u7fd2\u3055\u305b\u305f\u7d50\u679c\u3001\u7b2c\u4e00\u5c64\u306e\u30e6\u30cb
\u30c3\u30c8\u306f\u30b7\u30e7\u30a6\u30b8\u30e7\u30a6\u30d0\u30a8\u306e\u8996\u7d30\u80de\u5f8c\u30cb\u30e5\u30fc\u30ed\u30f3\u306b\u985e\u4f3c\u3057\u305f\u6642\u9593\u30d5\u30a3\u30eb\u30bf\u30fc\u3092\u767a\u9054\u3055\u305b\u3001\u7b2c\u4e8c\u5c64\u306e\u30e6\u30cb\u30c3\u30c8\u306f\u65b9\u5411\u9078\u629e\u6027\u3092\u6301\u3064\u3088\u3046\u306b\u306a\u308a\u307e\u3057\u305f\u3002\u3053\u308c\u306f\u3001\u751f\u7269\u5b66\u7684\u611f\u899a\u56de\u8def\u306e\u30e2\u30c7\u30ea\u30f3\u30b0\u306b\u304a\u3044\u3066<strong>ReSU<\/strong>\u304c\u6709\u671b\u3067\u3042\u308b\u3053\u3068\u3092\u793a\u5506\u3057\u3066\u3044\u307e\u3059\u3002<\/p>\n<\/div>\n<p>\ud83d\udc49 <strong><a href=\"https:\/\/arxiv.org\/abs\/2512.23146\" target=\"_blank\" rel=\"noopener\">arXiv \u3067\u8a18\u4e8b\u5168\u6587\u3092\u8aad\u3080<\/a><\/strong><\/p>\n<ul>\n<li><strong>\u8981\u70b9:<\/strong> Rectified Spectral Units (ReSUs) offer a biologically plausible, backpropagation-free method for learning hierarchical features in deep neural networks through self-supervision.<\/li>\n<li><strong>\u8457\u8005:<\/strong> Shanshan Qin, Joshua L. Pughe-Sanford, Alexander Genkin, Pembe Gizem Ozdil, Philip Greengard, Anirvan M. Sengupta, Dmitri B. Chklovskii<\/li>\n<\/ul>\n<blockquote class=\"wp-block-quote\"><p><span>English Summary:<\/span><\/p>\n<p>This paper introduces a biologically inspired, multilayer neural architecture composed of Rectified Spectral Units (<strong>ReSUs<\/strong>). Each <strong>ReSU<\/strong> projects a recent window of its input history onto canonical directions obtained via canonical correlation analysis (CCA) of past-future input pairs, and then rectifies either its positive or negative component.<\/p>\n<p>By encoding canonical directions in synaptic weights and temporal filters, <strong>ReSUs<\/strong> implement a local, self-supervised algorithm for progressively constructing increasingly complex features. 
This offers a biologically grounded, backpropagation-free paradigm for constructing deep self-supervised neural networks.<\/p>\n<p>When a two-layer <strong>ReSU<\/strong> network was trained in a self-supervised regime on translating natural scenes, first-layer units developed temporal filters resembling those of Drosophila post-photoreceptor neurons, and second-layer units became direction-selective, suggesting <strong>ReSUs<\/strong> are promising for modeling biological sensory circuits.<\/p>\n<\/blockquote>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>AI\u306e\u6700\u65b0\u7814\u7a76\u3092\u8981\u7d04\u3002\u968e\u5c64\u7684\u81ea\u5df1\u56de\u5e30\u30e2\u30c7\u30eb\u300cPHOTON\u300d\u3001\u7d99\u7d9a\u5b66\u7fd2\u306b\u304a\u3051\u308b\u8868\u73fe\u30c9\u30ea\u30d5\u30c8\u3001\u8aa4\u5dee\u9006\u4f1d\u64ad\u3092\u7528\u3044\u306a\u3044\u751f\u7269\u5b66\u7684\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u306b\u3064\u3044\u3066\u89e3\u8aac\u3002<\/p>\n","protected":false},"author":1,"featured_media":849,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"vkexunit_cta_each_option":"","footnotes":""},"categories":[3],"tags":[16,64,15,92,37,91],"class_list":["post-889","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-columns","tag-llm","tag-64","tag-ai","tag-92","tag-37","tag-91"],"_links":{"self":[{"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/posts\/889","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/comments?post=889"}],"version-history":[{"count":0,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/posts\/889\/
revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/media\/849"}],"wp:attachment":[{"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/media?parent=889"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/categories?post=889"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/itexplore.org\/jp\/wp-json\/wp\/v2\/tags?post=889"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}