Kepler’s Orbital GPU Cluster Expands With Sophia Space Partnership to Test In-Orbit Software Deployment

This article was generated by AI and cites original sources.

Kepler Communications has opened a path for running GPU workloads in orbit. The company’s largest in-space compute cluster, launched in January, has gained a new customer, Sophia Space, which will test its operating system by deploying it across GPUs on Kepler’s satellites. According to TechCrunch, the move represents an early step toward making space-based sensing more useful by performing edge processing where data is collected, rather than sending everything back to Earth.

Kepler’s Orbital Compute Cluster

At the center of the announcement is a cluster built for in-orbit compute and networked connectivity. Kepler currently operates about 40 Nvidia Orin edge processors across 10 operational satellites, all linked together by laser communications links. The company has 18 customers, and Sophia Space will use the constellation to validate software that will run on Sophia’s own passively cooled space computer.

Kepler describes itself not as a conventional data center operator, but as infrastructure for applications in space. According to CEO Mina Mitry, the company aims to provide a layer that can offer network services for other satellites and for “drones and aircraft in the sky below,” positioning its system as a platform for distributed processing rather than a single monolithic facility.

Because the satellites are connected via laser communications links, the cluster’s compute resources can be organized as a networked system rather than as separate, isolated units in orbit.

Sophia Space’s Passively Cooled Design and Thermal Challenges

Sophia Space is developing passively cooled space computers. TechCrunch identifies thermal management as “one of the key challenges for large-scale data centers in orbit,” and positions passive cooling as a way to avoid the need to build and launch “heavy, expensive active-cooling systems.”

In the partnership, Sophia will upload its proprietary operating system to one of Kepler’s satellites and attempt to configure it across six GPUs on two spacecraft. TechCrunch notes that similar activity is “table stakes” in terrestrial data centers, but this is “the first time it will be attempted in orbit.” The test is designed to validate whether a software stack and deployment process can be executed under orbital conditions. Making sure the software works in orbit will be a key de-risking exercise for Sophia ahead of its first planned satellite launch in late 2027.

Industry Timeline and Edge Processing Strategy

TechCrunch situates Kepler’s cluster within a broader industry timeline: “Experts expect that we won’t see large-scale data centers like those envisioned by SpaceX or Blue Origin until the 2030s.” In the near term, the first step will be processing data collected in orbit to improve the capabilities of space-based sensors used by private companies and government agencies.

Kepler’s approach focuses on edge processing—handling data where it is collected for faster responsiveness. Mitry ties the architecture to how workloads should be distributed. In a statement to TechCrunch, Mitry argues for “more inference than training,” saying Kepler wants “more distributed GPUs that do inference, rather than one superpower GPU that has the training workload capacity.”

Mitry’s reasoning is also power-oriented. According to TechCrunch, Mitry stated: “If this thing consumes kilowatts of power and you’re only running at 10% of the time, then that’s not super helpful. In our case, our GPUs are running 100% of the time.” In space, the power budget and utilization profile can strongly influence whether a system should be designed for continuous edge inference versus bursts of training.
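Mitry’s utilization argument can be made concrete with simple arithmetic. The numbers below are hypothetical placeholders, not figures from Kepler or TechCrunch; the point is only that duty cycle scales the compute a fixed power budget actually delivers.

```python
# Illustrative duty-cycle comparison. All TOPS and wattage figures are
# hypothetical assumptions, not Kepler's or Nvidia's published numbers.

def effective_tops_per_watt(peak_tops: float, power_w: float, utilization: float) -> float:
    """Average useful TOPS delivered per watt, scaled by duty cycle."""
    return (peak_tops * utilization) / power_w

# One large training-class GPU that is busy only 10% of the time.
big_gpu = effective_tops_per_watt(peak_tops=1000.0, power_w=700.0, utilization=0.10)

# Ten small edge modules running inference continuously.
edge_fleet = effective_tops_per_watt(peak_tops=10 * 40.0, power_w=10 * 25.0, utilization=1.0)

print(f"large GPU:  {big_gpu:.3f} effective TOPS/W")   # ~0.143
print(f"edge fleet: {edge_fleet:.3f} effective TOPS/W") # 1.600
```

Under these assumed numbers, the always-on edge fleet extracts far more useful compute per watt, which is the shape of the trade-off Mitry describes.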

Near-Term Implications for In-Orbit Software and Networking

Kepler’s partnership with Sophia serves as a validation step for both networked compute and software deployment. The immediate goal—uploading Sophia’s operating system and configuring it across six GPUs on two spacecraft—could serve as an early proof point that orchestration patterns familiar from terrestrial data centers can be adapted to orbital constraints.

According to the TechCrunch report, Kepler’s longer-term expectation is that, as the sector matures, it will begin linking up with third-party satellites to provide networking and processing services. This could suggest a shift from isolated mission payloads toward shared infrastructure models, where compute resources can be offered across a constellation.

The report’s timeline—large-scale space data centers not expected until the 2030s—indicates that companies will likely focus on “processing data collected in orbit” as the first scalable value proposition. In that context, the Kepler-Sophia test is less about building a full “data center in space” and more about verifying that distributed inference-style workloads can run reliably when software is deployed and configured across multiple orbital nodes.

Source: TechCrunch