Japan’s Next Wave of Autonomy: Show-Stopping Tech from CES 2026

Japan kicked off CES in a big way with a presentation of some of its “showstopping” tech startups.

By: Dawn Zoldi

At a packed CES 2026 pitch session co-hosted by the Japan External Trade Organization (JETRO) and ShowStoppers, a slate of Japanese startups showed how the country’s deep strengths in hardware, manufacturing and content converge with AI and autonomy. For the autonomy ecosystem (think: drones, robotics, and intelligent systems), three companies in particular stood out (SwipeVideo, Innovative Space Carrier and Tokoshie), while the others had an indirect nexus. Read on to learn about some of the startups ready to power Japan’s autonomy industry and take the world by storm!

JETRO’s Three Big Tech Currents

Dawn Zoldi/P3 Tech Consulting
JETRO Executive Vice President Mio Kawada

JETRO Executive Vice President Mio Kawada opened the session. She framed Japan’s startup landscape around three intersecting currents: deep tech with practical applications, human‑centric innovation and AI as a fundamental enabler.

  1. Deep Tech with Practical Applications: Kawada highlighted companies ranging from Innovative Space Carrier’s autonomous microgravity labs to Tokoshie’s AI‑powered 3D printing platform and Pocket DR’s avatar system, all turning advanced R&D into commercially relevant solutions.
  2. Human‑Centric Innovation: This showed up in experiences like Gakugeki’s VR‑powered entertainment, Wiillow’s AI mental health support for K‑5 students, and SwipeVideo’s multi‑angle streaming that “democratizes” immersive viewing.
  3. AI as a Fundamental Enabler: Across the portfolio, AI has become an embedded infrastructure layer, from Qlay’s AI proctoring to generative AI woven into content, assessment and simulation platforms.

Kawada underscored that these trends build on Japan’s traditional strengths in precision hardware and manufacturing, now fused with software, services and globally resonant entertainment IP such as gaming and anime. For autonomy stakeholders, Japan treats AI and autonomy as cross‑cutting capabilities, not single‑sector niches.

SwipeVideo: Multi‑Angle Vision for Public Safety and Defense

Dawn Zoldi/P3 Tech Consulting
SwipeVideo’s Global Sales Director, Matthew Boyer

SwipeVideo’s Global Sales Director, Matthew Boyer, asked the room to imagine watching any sports or entertainment event from any angle, as if each viewer were their own broadcast producer. SwipeVideo delivers that through an interactive player that lets users choose freely among dozens of synchronized camera angles, live, at full resolution, and embeddable into web, mobile and TV apps without a VR headset.

Unlike traditional “free viewpoint” systems deployed at events like the Olympics, which produce fixed highlight clips with no user control and high production cost, SwipeVideo supports “limitless” angles. Boyer cited deployments with 27–30+ feeds, all while streaming live without buffering.

The system is camera‑agnostic. It can use event organizers’ existing multi‑camera rigs or additional cameras supplied and installed by SwipeVideo, all managed through a cloud architecture that Boyer described as the result of “three years cracking the code” on networking and optimization.

For public‑safety and defense applications, that architecture can map almost directly onto multi‑sensor autonomy concepts. The multi‑angle video would be ideal for training police officers on engagements, as SwipeVideo’s implementation offers significantly more camera views and higher fidelity than previous multi‑angle trials. Extending that to drones is straightforward: drones can provide overhead feeds, stitched into the same interactive timeline as body‑worn, vehicle‑mounted or fixed cameras.

During disaster response efforts, such as wildfires, floods or earthquakes, a SwipeVideo‑style platform could synchronize aerial drone imagery with ground‑level video at evacuation centers, roadblocks and critical infrastructure to give incident commanders a time‑aligned, multi‑perspective replay for both live decision‑making and after‑action review.
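Conceptually, a time‑aligned, multi‑perspective replay of this kind boils down to grouping frames from independent feeds by a shared clock. The sketch below is purely illustrative (it is not SwipeVideo’s actual implementation, and the function and feed names are hypothetical): each source’s timestamped frames are bucketed onto one timeline so every available angle can be retrieved for any moment.

```python
from collections import defaultdict

def align_feeds(feeds, bucket_ms=33):
    """Group frames from multiple sources into shared time buckets.

    feeds: dict mapping source name -> list of (timestamp_ms, frame) pairs.
    bucket_ms: bucket width; ~33 ms approximates one frame at 30 fps.
    Returns a sorted list of (bucket_start_ms, {source: frame}) entries,
    keeping the latest frame per source within each bucket.
    """
    timeline = defaultdict(dict)
    for source, frames in feeds.items():
        for ts, frame in sorted(frames):
            bucket = (ts // bucket_ms) * bucket_ms
            timeline[bucket][source] = frame  # latest frame in bucket wins
    return sorted(timeline.items())

# Example: a drone feed and a body-worn camera feed on one timeline.
feeds = {
    "drone": [(0, "d0"), (34, "d1")],
    "bodycam": [(2, "b0"), (35, "b1")],
}
timeline = align_feeds(feeds)
# Each timeline entry now offers every angle available at that moment,
# which is what an incident commander would scrub through on replay.
```

A production system would of course also handle clock synchronization across devices, dropped frames and variable latency; the point here is only the shape of the data model.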

In support of defense and security training, multi‑angle capture of live‑fire exercises, convoy operations, or shipboard drills, augmented with small drones overhead, could let analysts track specific individuals or vehicles across a scenario, with the same “eye‑tracking” style angle selection that Boyer demonstrated in a football use case.

Economically, Boyer noted that customers using SwipeVideo for live events see 1.2–1.7x revenue increases, driven by premium pricing for immersive access and user‑generated highlight clips that amplify engagement. That same willingness to pay for richer situational awareness and training content is already evident in defense and public‑safety markets, suggesting a viable path into autonomy‑heavy domains.

Innovative Space Carrier: Autonomous Labs in Microgravity

Dawn Zoldi/P3 Tech Consulting
Kei Shimada, Chief Business Officer of Innovative Space Carrier

Kei Shimada, Chief Business Officer of Innovative Space Carrier, shifted the discussion to orbital experimentation. He introduced “Gravity,” an autonomous microgravity lab designed as a payload that operates independently in orbit.

For autonomy and robotics, microgravity testing is fast becoming an operational imperative. Shimada pointed to testing adhesives and materials in orbit, where different fluid dynamics and density behavior change how structures age and fail. The resulting data informs design of autonomous spacecraft, on‑orbit robotics, and satellite servicing systems that must operate reliably in those conditions.

Shimada also cited potential experiments with 3D‑printed organs and drug delivery in microgravity, with findings that could feed into life‑support and medical‑autonomy systems for long‑duration missions or for space tourism, an area where Innovative Space Carrier already has 9,000 pre‑registrations, the majority of them women. He described the very practical challenge of passengers who want to keep their shoes on in microgravity as an example of the thousands of design questions that demand orbital data. That is exactly the kind of input needed to build autonomous cabin systems (think: navigation, safety and environmental control) that will have to make decisions without real‑time human intervention.

But today, most serious microgravity research takes place on the International Space Station (ISS), involving complex astronaut operations, multi‑year planning cycles and multi‑million‑dollar budgets. At the other end of the spectrum, parabolic flights provide seconds of reduced gravity, at high cost and with very short experiment windows. Innovative Space Carrier positions Gravity as a middle path: a self‑contained experiment module that rides on third‑party satellites (12U–16U buses) and conducts controlled experiments in microgravity over 6–18 months, downlinking thermal, optical and environmental data without necessarily being recovered.

Market‑wise, Shimada cited a current microgravity R&D market of roughly 3.3 billion USD expected to double within four years, with strong demand in biotech, materials and manufacturing, plus consumer brands interested in “space‑tested” differentiation. The company is targeting experiments in the 3–5 million USD range and raising 6.5 million USD to move from an engineering model (TRL‑4) toward a planned first commercial launch around early 2028.

The autonomy link here is both direct and conceptual. Gravity itself is an autonomous lab, and the data it returns becomes training and validation fodder for the next generation of space‑based autonomous systems.

Tokoshie: Conversational CAD and the Next Generation of Autonomy Engineers

Dawn Zoldi/P3 Tech Consulting
Tokoshie’s founder and CEO, Tatsunori Watanabe

Tokoshie’s founder and CEO, Tatsunori Watanabe, delivered perhaps the most immediately tangible autonomy enabler: an AI‑native conversational CAD environment and 3D printing workflow branded “Tokoshie” (a traditional Japanese word meaning “eternal” or “forever”).

The company’s mission is “to empower everyone to make anything” by closing the gap between digital design and physical manufacturing. Watanabe cited a 2016 forecast that 3D printing would reach 400 billion USD by 2025, while noting that the actual market remains around 19 billion USD. The industry, he said, fell short of expectations largely because design tools are too complex, expertise‑dependent and poorly connected to manufacturability.

Tokoshie tackles that by turning natural language prompts and simple sketches into production‑ready mechanical designs, layered with automated checks for strength and printability and then driving in‑house 3D printers with minimal human intervention. Users describe what they need, or upload a sketch, and an AI‑powered design agent generates a manufacturable 3D model in seconds. It bundles design, simulation, print preparation and test logic into a single smart workflow.

Watanabe emphasized the company’s performance data from early deployments: over 20,000 users in Japan, primarily robotics startups, R&D teams and university labs, reported lead‑time reductions of more than 90 percent and, in some cases, cost reductions in the 70–80 percent range.

For autonomous systems, that acceleration is critical. Drone airframes, sensor mounts, custom gimbals and small ground‑robot chassis all live or die on rapid iteration. A conversational CAD environment that encodes manufacturability and structural heuristics lets autonomy teams spin through far more design cycles before committing to tooling or certification.

Watanabe was also explicit that one of Tokoshie’s first missions is “reskilling” and enabling younger or less experienced users to work effectively with CAD, which he described as traditionally “very difficult” and high‑expertise. That directly strengthens the pipeline of future robotics and autonomy engineers: students who can go from concept to printed mechanism in days instead of semesters.

Tokoshie is already building AI “agents” that connect to multiple optimization engines, selecting good practices and viable structures based on a proprietary dataset. That approach mirrors how autonomy stacks are designed: ensembles of specialized models coordinated by a higher‑level planner. It could eventually converge with simulation‑in‑the‑loop testing for drones and robots.

Tokoshie’s business model blends software as a service (SaaS) subscriptions for the design environment with on‑demand manufacturing for printed parts. It targets a hardware and robotics market the company sizes in the hundreds of billions of dollars, with an initial focus on robotics startups, R&D units and university labs.

Other Standouts: AI, Avatars, Audio and Anime

While not all of the remaining companies had a direct autonomy nexus, several touched adjacent technologies that will shape how autonomous systems interact with humans and society.

Qlay (Tom Nakata, Co‑founder & CEO) is tackling AI‑assisted cheating in online tests and interviews through an AI proctoring platform that tracks eye movements, secondary devices and speech patterns to flag suspicious behavior. The tech uses a phone as a side‑angle camera and analytics engine. As AI co‑pilots enter more workflows, including autonomy operations centers, systems like Qlay’s will be part of maintaining trust in credentialing and remote hiring for highly sensitive roles.

Pocket DR (Shigeki Uchida, CTO) offers a compact avatar video booth that transforms up to four users into high‑quality 3D avatars in under a minute. This company is targeting theme parks, shopping malls and entertainment brands. As social robots and virtual agents become the face of autonomous services, the underlying avatar pipelines and safety‑controlled content libraries Pocket DR is building could become relevant.

Dawn Zoldi/P3 Tech Consulting
Daisuke Takazoe, CEO of Gakugeki

Verne Technologies (Daiki Takeuchi, Founder & CEO) introduced “Wearphone,” a mask‑style wearable with active and passive noise cancellation that creates a private, mobile voice booth while also providing direct voice interaction with AI systems. Private, reliable voice input is a prerequisite for many human‑in‑the‑loop autonomy operations, from command centers to field technicians working alongside robots.

Wiillow (Ryosuke Takenaka, Founder & CEO) is building an AI copilot for K‑12 counselors and teachers that turns student voice check‑ins, notes and teacher feedback into counseling plans and classroom activities, surfacing real‑time well‑being insights. As AI and autonomy permeate education, mental‑health aware systems like Wiillow’s could provide important context for any deployment of robotic tools in schools.

Gakugeki (Daisuke Takazoe, CEO) delivered one of the most purely fun demos: a VR‑driven anime experience that lets fans meet beloved characters face‑to‑face in immersive environments, using AI‑powered 3D stage direction and high‑quality Japanese IP. The same toolchains that choreograph characters and audiences in virtual space are directly relevant to social robotics, virtual telepresence and human‑robot interaction research.

Individually, these companies span proctoring, avatars, audio, education and entertainment. Collectively, they reinforce Kawada’s framing: AI provides both an infrastructure and an interface layer across Japanese sectors.

Why This Matters for Autonomy

From multi‑angle video to microgravity labs and conversational CAD, the autonomy‑relevant startups in JETRO’s CES cohort compress the distance between complex physical realities and human decision‑making. SwipeVideo turns dense, multi‑sensor scenes into navigable experiences that could underpin better training and command and control (C2) for drone‑enabled operations. Innovative Space Carrier abstracts away the friction of orbital experiments, which allows autonomous systems to be designed with real microgravity data rather than Earth‑bound approximations. Tokoshie lowers the barrier to designing, simulating and printing new hardware, which invites a broader community to participate in building the next generation of autonomous platforms. For global readers of Autonomy Global, Japan’s “showstopping tech” at CES 2026 provided a preview of an integrated autonomy stack where content, hardware, space and human experience come together.