flowt.png

We’re living through the fourth phase of the post-internet evolution.
This work sits inside those transitions — from pre-cloud systems to Physical AI, robotics, and deep-tech built at the edge — documenting how technology leaves the screen and reshapes space, objects, and work.

#PhysicalAI #Robotics #EdgeIntelligence #IndustrialReshoring

#SpatialComputing #AR/VR/XR #ImmersiveWorkspaces #xPresence

#IoT #SmartObjects #EmbeddedIntelligence #Edge/Fog

THE AI JUST LEFT THE SCREEN

#PhysicalAI #Robotics #EdgeIntelligence #IndustrialReshoring

Empowering Robotics Startups

What unites the projects in this section is the shift of AI from screens into the physical world — systems that act, sense, and adapt in situ. This trajectory began with early platforms and hands-on experimentation, including domestic and consumer robotics, and continues today through institutional programs (INNOVIT and SRI International, including the former PARC) and strategic advisory work shaping new generations of startups.

INNOVIT logo white.png
SRI logo white.png

Sea Robots: Marine-grade A.I. (CPO)

I shape intelligent autonomy for an Italian yachting company by transforming certified navigation data, onboard trials, and decades of maritime measurements into operational AI for Sea Robots. Based in Palo Alto, I act as SailADV’s strategic center, aligning product vision, system architecture, and execution. This work led to D.gree V26 and the cognitive agent Sailly, integrating hardware, software, and AI into unified, mission-oriented systems.
Born at sea. Designed in Palo Alto. Made in Italy with care.

l02.png

Strategic product positioning & design leadership

Shaping SailADV as a dual company (Italy + Silicon Valley), aligning design thinking, brand architecture, and group structure. Leading the redesign of SailADV.com and Dgree.com, and defining the communication strategy and the shared value narrative: “Captains at the Center — Value for Owners, Crews, and Yards.”

l01.gif

Human–machine interaction & cognitive interface

Redesigning the entire interaction layer — advanced dashboards for mission-critical monitoring, new mobile and smartwatch interfaces for captains and crews, and a unified UX across shipyard, onboard, and remote operations.
This work leads to Sailly, the first cognitive agent built on D.gree’s foundational data.

Precision Medicine Robots

Multiply Labs develops robotic systems for automated precision medicine manufacturing, including cell and gene therapies. Based in San Francisco, the platform combines modular robotics and software to enable scalable, regulated biopharma production. As design advisor, I supervised brand and product language design — including supplier selection — supported the UX of the robot programming software, and defined aesthetic guidelines for the company’s headquarters. The system received a GOOD DESIGN Award.

ML_anim_basic-high.gif

Robots Making Precision Medicines

San Francisco, CA | 2022–today

Developed advanced robotic systems for pharma manufacturing, setting a new standard in the life sciences industry. These life-saving technologies ensure next-gen precision medicines reach patients efficiently and at scale.

Image by Aleks Dahlberg

COGNIVIX 

San Francisco, CA | 2025–today  

Cognivix builds physics-based imitation engines for industrial robot arms, enabling one-shot learning from human demonstration. Combining AI and 3D vision, the technology retrofits existing arms for high-mix, low-volume manufacturing and explores scalable Physical AI in adaptive factories.

layered_gif_800x452_1765140240874.gif

MEDIATE  

Milan, Italy | 2025–today
MEDIATE uses electromagnetic field sensing to detect human presence near industrial robots, enabling proactive collision avoidance and more fluid operation than impact-based cobots. The approach suits anthropomorphic robots and is backed by ABB Robotics (now SoftBank).

archie 3.gif

ARCHIE 

San Francisco, CA | 2020–2021
ARCHIE was designed during COVID-19 as a Robot-as-a-Service to restore trust in hotel rooms under emergency conditions. Operating before guests and staff, the robot performed UV sterilization in empty rooms, acting as a trusted companion for housekeeping teams. Blockchain certification ensured verifiable sanitization and enabled service-based transactions. The project stopped with the arrival of vaccines but remains a reference case for emergency-driven service robotics.

45806dfb-a3e7-44a7-a683-8dd6dba5f697_2822x1266.webp

GANIGA 

Florence, Italy | 2024–today
GANIGA applies AI and robotics to smart waste management at the point of disposal, generating detailed data on consumption, materials, and brands. Its custom foundational models recognize objects even when deformed or fragmented, turning everyday waste into structured data embedded in physical space.
SELECTED STARTUP TECHCRUNCH 2025

WHEN SPACES BECOME INTERFACES

#SpatialComputing #AR/VR/XR #ImmersiveWorkspaces #xPresence

Physical AI (with or without Glasses)

Since my student years — when early AR concepts led to the Apple Design Project — direct experimentation with spatial interaction gradually moved interfaces beyond screens and into physical and hybrid environments. Across industrial AR, deviceless systems, and early AI-driven virtual worlds, this work explores how space itself becomes an active interface — not only displaying information, but hosting behaviors, rules, and emerging forms of agency.

Slide13 copy.jpeg

APPLE DESIGN (STUDENT) PROJECT

While studying Interaction Design at Domus Academy, early conceptual AR work led our small student team to present in Cupertino, inside the internal theater at Apple headquarters. In 2024, archived materials from that project helped Apple’s Education team reconstruct parts of the Apple Design Project that had been lost over time.

Deviceless Spatial Computing

With remote and hybrid work, intelligent office spaces are no longer optional; they are a necessity.

This work turns physical business environments into programmable interfaces, embedding digital layers into space to support hybrid collaboration and co-presence without personal devices.

To structure this approach, I designed the 7 Axis Experience Mapping Tool, used to evaluate experiences across multiple dimensions — including immersion, multimodality, time, and social density.

cokoon.gif

COKOON: WORKSPACE TRANSFORMATION

San Francisco, CA | 2020–2022
NTT COKOON explored how workspaces could be treated as programmable systems rather than static offices. Built as a deviceless spatial computing platform, it enabled immersive and hybrid collaboration by embedding digital layers directly into physical space. The project framed environments as active participants in work processes — a concept summarized as “This room is a robot.”

IMG_6107.jpg

LIVE DECK

San Francisco, CA | 2020–2022
LIVE DECK was designed as an authoring layer for spatial environments — a PowerPoint-like tool enabling teams to “dress” physical spaces with digital content.
Its mobile version acted as a remote control for the environment itself, allowing non-technical users to orchestrate spatial experiences in real time without relying on headsets or wearables.

Industrial AR & Physical AI

Industrial augmented reality brought digital context directly into physical workflows, enabling operators to perceive task-relevant information in situ rather than on separate screens. This integration of visual, spatial, and procedural data was an early instantiation of space functioning as an interface in real work environments.

context_computing_cover_lb-high.gif

Industrial Augmented Reality

Milan, Italy | 2008–2012

As President of Joinpad.net, I helped shape the vision for industrial remote-assistance systems across head-mounted displays and tablet-based devices, integrating spatial interfaces into real industrial workflows.

mindis.png

MINDIS

Milan, Italy | 2025–2026
MINDIS applies geolocated AR interfaces integrated with BIM systems to support progress tracking and on-site check-in across complex construction sites. The work focuses on turning spatial data into actionable insight directly in the field, enabling faster decisions and clearer coordination between digital models and physical reality.

Early AI agents & the Metaverse

Long before “the Metaverse” became a buzzword, virtual worlds and early AI agents served as laboratories for presence, identity, and situated interaction. In these environments, computation had to respond to real human behavior — gaze, attention, decision rhythm, social coordination — revealing patterns that inform today’s embodied and hybrid systems. Second Life, full-body avatars, and early conversational agents acted as prototypes for immersive collaboration, shaping questions about agency, co-presence, and the boundaries between physical and digital. This perspective also informs my involvement with Metavethics, a Cambridge University-born think tank advancing sustainable, ethical, and inclusive approaches to metaverse development.

metavethcs.png
Mixed reality SL.mov-high.gif

Metaverses & Mixed Reality

Milan, Italy | 2006–2008

Years before the Metaverse, Second Life was a live laboratory for business, governance, and interaction design. The systems built there, the remote team-governance models tested, and the hybrid physical–virtual events staged all anticipated today’s immersive collaboration models.
Some of these efforts were later recognized through awards and referenced in books.

Khumans OMG-high.gif

K-humans: Full-Body Virtual Assistants

Milan, Italy | 2005–2008

K-Humans combined early language understanding, rule-based intelligence, and real-time emotional expression to deliver responses through video avatars across web browsers and pre-smartphone Nokia devices. Deployed in live contexts — from virtual call centers to information kiosks — it anticipated conversational interfaces, multimodality, and embodied presence. Material from the project is now part of the Computer History Museum collection in Silicon Valley.

THE OBJECTS STARTED "TALKING"

#IoT #SmartObjects #EmbeddedIntelligence #Edge/Fog

Pioneering the Internet of Things 

As objects became connected, interaction shifted again — from controlling devices to listening to them. Sensors, embedded intelligence, and early AI transformed products into sources of signals, feedback, and behavior. This phase explored how physical objects began communicating state, intent, and context, reshaping services, infrastructures, and decision-making long before “smart” became a label — and before Physical AI emerged as a defining direction for the industry.

Widetag recap II-high.gif

WideTag: Visionary IoT Products

Redwood City, CA | 2008–2011

WideTag was my first startup in California, founded and led as CEO, and among the earliest IoT ventures in the Valley. Built on OpenSpime, an open-source, Erlang-based platform, it pioneered crowdsourced environmental sensing via smartphones, coining concepts like “crowdsourced ecology” and developing technologies to reduce residential energy consumption. Projects like WideNoise produced the first global noise-pollution maps and entered the ADI Index for the Compasso d’Oro. The company gained early international visibility (CNN, The New York Times, WIRED) but closed as the 2008 financial crisis and the rapid shift of investor attention toward iPhone apps outpaced market readiness for air-quality and energy-consumption monitoring. Thank you, Bruce Sterling!

0000 Screen Shot 2022-04-05 at 8.48.47 PM.png

From industrial IoT with ABB to early FOG/edge computing

Milan, Italy → Palo Alto, CA | 2014–2018

While serving as Executive Digital Director at Design Group Italia, I led digital transformation programs for multiple industrial clients, including ABB (Medium Voltage division). The work combined technology scouting, product strategy, and digital product design across large international teams. As ABB consolidated its digital capabilities in California, the transition extended to Palo Alto, where early work with Nebbiolo Technologies contributed to the emergence of Fog Computing — a precursor to today’s Edge architectures — and broader industrial IoT projects in Silicon Valley.

Small Devices (Health)

My work in digital health took shape during my time as Global UX Director at Razorfish Healthware (2011–2013), where healthcare emerged as a core design domain rather than a vertical.

The focus on small devices and applied health technologies, however, was developed earlier and more concretely in my role as Executive Digital Director at Design Group Italia (DGI), where I originated and led projects with insurers such as SARA, Vittoria, and Generali (one of the world’s largest insurance groups). There, I led research, designed connected wearable and home devices, and guided the teams behind award-winning consumer health products.

e4d4bf_8fca3341ddca49f18131aabc8ae111cf~mv2.gif

Generali Welion: Wellbeing, wearable & Trillio

Milan, Italy | San Francisco, CA | 2016–2020

This phase focused on wellbeing and small devices, primarily through a multi-layered collaboration with Generali: one of the largest user research programs on wearable devices of that period; the definition of Welion’s wellbeing services; the design of physical wellness office spaces within Generali’s Tower in Milan; and product design work on Geniot.

Within an open innovation framework, projects such as Trillio explored how small, human-centered devices could support therapy adherence for elderly people through personalized audio reminders, including familiar voices recorded by caregivers.

The collaboration with Generali continued over the years, later evolving into technology scouting assignments conducted from Palo Alto, CA.

d-heart 6 sec.mov-high.gif

D-HEART: From invention to product

Milan, Italy | 2018–2020
At Design Group Italia, I introduced a startup-oriented offering specifically designed to bridge the gap between invention and product — addressing product–market fit and assembling integrated teams across industrial design, design engineering, and UX.

The D-Heart startup was chosen because it fit the definition of an exponential object under Singularity University’s 6Ds framework: a concrete example of how exponential thinking can turn a medical invention into a scalable, accessible product.

This approach transformed a prototype into an affordable, smartphone-based ECG device, impacting the lives of thousands of people, and culminated in D-Heart winning the Compasso d’Oro, bringing the award back to DGI after more than twenty years.

BEFORE THE CLOUD

#WebBuildingBlocks #MobileFoundations #ClientSideLogic #IxDprimitives

Before the Cloud...

I worked on early real-time news platforms and major sports websites of the first Italian internet era, including CanaleSport. As mobile emerged, I founded a consulting company operating between Milan and Boston, growing to 100+ people, and led the design of the first 3G user interfaces for Andala / H3G.

In that pre-cloud phase, eye-tracking and early rule-based “AI” assistants were already used to make content usable on small Nokia screens. In 2001, in a kitchen in Boston, a local founder showed me an early prototype of what would later be called Android.

H3G early UX-high.gif

First Mobile 3G UI/UX (AltoProfilo) 

Milan, Italy | 1999–2001

The Nokia world was hierarchical and strictly text-based. With the introduction of 3G, graphics began to emerge and devices gained limited, partial capabilities to render HTML. That future was still shaped by heavy technical constraints: patchy bandwidth, limited processing power, and low device availability. This first generation of mobile interfaces gave form to the transition, outlining an early version of a connected future.

eye tracking.mov-high.gif

Eye Tracking for Mobile & Medical Devices (patents)

Milan, Italy | 2003–2005

Early work with Tobii eye-trackers, starting from their initial release, led — in collaboration with SRLABS — to patents on foundational gaze-based interaction techniques, including point-and-click, with initial applications in healthcare.

As 3G networks became more reliable, we designed and patented the Intelligent Cropping System: a real-time eye-tracking approach that reframed video around where attention actually falls — making sports and media content enjoyable for the first time on small-screen devices.


I write about innovation strategy, human behavior, and design at the intersection of AI and robotics. 

substack logo white.png

Connect with me on LinkedIn, where you’ll find my email and mobile number on my profile page. WhatsApp first.

LinkedIn logo white.png