Founded in 1999 by Philip Rosedale in San Francisco
The inspirations:
Snow Crash
Previous Virtual Worlds
The Early Years
Why Then?
Broadband was becoming ubiquitous.
So was cheap consumer 3D hardware acceleration.
The Early Years
Before Beta
Built the core technology and tools.
Considered building content ourselves, but realized that costs were prohibitive.
Give the tools to the users. They will build the world for us.
The Early Years
Launch
Second Life was launched in June 2003
A small core of dedicated users, but uptake was slow.
Why?
The Early Years
Survival of the Fittest
We had built a system in which you were taxed for resources and given L$ to buy resources,
with the amount depending on the quality of your content (from ratings).
Elegant, but completely ruthless. Too difficult for people to understand - too much like work.
If you weren't constantly creating newer, better content, your creations would get DELETED.
Users got angry!
The Early Years
The Breakthrough
Instead of limiting what people can do (like a game),
allow people to pay for as many resources as they want to use!
Make it possible for users to benefit financially from the content they create, by supporting
an L$<->US$ exchange (GOM first, then the LindeX)
Accelerating Growth
Growth slowly starts to accelerate. This is good!
2004 and 2005 see steady growth, doubling around every 6 months. But then, in 2006...
Accelerating Growth
Account growth
3/06 - 150K accounts
6/06 - 300K accounts
9/06 - 735K accounts
10/18/06 - 1M accounts
12/14/06 - 2M accounts
1/28/07 - 3M accounts
2/24/07 - 4M accounts
SL Map: 2003
SL Map: 2005
SL Map: 2006
SL Map: 2007 (March)
Peak Concurrency
20K at the end of December 2006
Almost 40K as of 3/18/07
Economic Growth
Our strengths are our weaknesses
Giving our users powerful tools to create whatever they want is a good thing.
But...
With power comes responsibility!
Denial of service attacks using our own system!
Self-replicating objects - gray goo.
Gray Goo
Under the Hood
So how does it all work?
Second Life runs on a cluster of distributed machines arranged in a grid topology
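Because regions tile the world in a fixed grid, finding the machine responsible for a point is pure arithmetic. A minimal sketch, assuming only the 256 m region size stated later in this talk (the function names are illustrative):

```python
# Sketch: which grid cell (and hence which simulator) owns a world position.
# The 256 m region edge comes from the talk; everything else is illustrative.

REGION_SIZE = 256  # metres per region edge

def region_for_position(global_x: float, global_y: float) -> tuple[int, int]:
    """Grid coordinates of the region containing a world-space point."""
    return int(global_x // REGION_SIZE), int(global_y // REGION_SIZE)

def local_position(global_x: float, global_y: float) -> tuple[float, float]:
    """The same point expressed in metres within its owning region."""
    return global_x % REGION_SIZE, global_y % REGION_SIZE
```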
Under the Hood
Simulator Nodes
Every region on the map (a 256m x 256m area) is run by one simulator process.
Each simulator process runs on one physical core
Physics simulation, the scripting engine, and data transmission to the clients
are all managed by the simulator process.
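The three jobs above interleave inside a single per-region loop. A hypothetical sketch of such a tick loop (all class and method names are invented for illustration; the ~45 fps target is an assumption, not a figure from this talk):

```python
import time

class RegionSimulator:
    """Hypothetical sketch of one simulator process (one per region)."""

    def __init__(self, frame_hz: float = 45.0):
        self.frame_time = 1.0 / frame_hz  # assumed per-frame time budget
        self.log: list[str] = []          # records what each tick did

    def step_physics(self) -> None:
        self.log.append("physics")        # advance the rigid-body simulation

    def run_scripts(self) -> None:
        self.log.append("scripts")        # give in-world scripts a time slice

    def send_client_updates(self) -> None:
        self.log.append("updates")        # stream object deltas to viewers

    def tick(self) -> None:
        start = time.monotonic()
        self.step_physics()
        self.run_scripts()
        self.send_client_updates()
        # Sleep off any remaining budget so the region runs at a steady rate.
        remaining = self.frame_time - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```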
Under the Hood
Cheap, Identical, Interchangeable Hardware
Regions are not tied to physical hardware.
Simulator nodes are commodity Linux boxes - currently 4 core 1U rackmounts
Buy them a rack at a time
Now adding hardware at a phenomenal rate
Three racks a week right now
3 racks x 41 1U servers x 4 cores = 492 regions/week!
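The capacity arithmetic from the slide, written out as a quick sanity check:

```python
# One region per core, so weekly region capacity is a straight product.
racks_per_week = 3
servers_per_rack = 41   # 1U servers in each rack
cores_per_server = 4
regions_per_week = racks_per_week * servers_per_rack * cores_per_server
print(regions_per_week)  # 492
```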
Under the Hood
Web Services
Abstracts away the complexity of the underlying topology
Allows us to use web-page caching technology (Squid) to improve performance
Simplifies implementation by using scripting languages (Python, PHP)
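The caching trick above works because a proxy like Squid only needs standard HTTP cache headers. A minimal sketch of the idea, assuming a hypothetical region-info endpoint (the handler, URL, and payload are invented for illustration, not Linden Lab's actual service):

```python
from http.server import BaseHTTPRequestHandler

class RegionInfoHandler(BaseHTTPRequestHandler):
    """Hypothetical internal web service fronting some piece of grid state."""

    def do_GET(self):
        body = b'{"region": "Example", "grid_x": 1000, "grid_y": 1000}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Marking the response cacheable lets an upstream Squid serve
        # repeated identical queries without touching backend services.
        self.send_header("Cache-Control", "public, max-age=60")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet
```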
Under the Hood
"Inventory" databases
Store references to all of the objects in world
Horizontally partitioned among accounts
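Horizontal partitioning means each account's inventory lives on exactly one database shard, chosen deterministically from the account identity. A sketch of one common way to do that; the shard count and hashing scheme here are assumptions, not Linden Lab's actual scheme:

```python
import hashlib

NUM_SHARDS = 8  # hypothetical number of inventory databases

def shard_for_account(account_id: str) -> int:
    """Map an account to its inventory shard, stably across calls."""
    # Hash the account ID so accounts spread evenly across shards
    # regardless of naming patterns.
    digest = hashlib.md5(account_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS
```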
Under the Hood
That's great, everything's partitioned. You can scale forever!
Well, actually...
Under the Hood
Central Database
One centralized database stores most non-inventory user information, as well as region and group information.
This is bad!
Currently that database handles > 2000 queries/second at peak concurrency
Under the Hood
Asset Server
A big, big (80+ TB) WebDAV file store that holds all asset data:
simstates (world state)
object data
texture data
everything in user inventory
Where are we going?
Scale, scale, scale!
Remove all of the centralized systems.
Continue to encourage user creativity by improving the tools they have to work with.
We need more people to help us do all these things!