SL Viewer 3.0 and Mesh

N.B. There are lots of better places to find specific details on SL viewer and mesh. If you want more depth, follow the links in this article.

The current SL viewer is now 3.0 (V3), released just within the last few days, I believe. I haven’t tried it yet. The main difference in V3 is support for mesh technology. (More about that below.) I use several SL third-party viewers (TPVs) depending on what I need to do. A full list of certified viewers is available at:

For day-to-day use, I normally run Phoenix. This is by far the most used third-party viewer. Pros: It’s intuitive (especially for long-time users) and has many (many) useful features for builders and overall usability. It supports SL Viewer 2.x (V2) features such as avatar tattoo and alpha layers, and it has some cool Windlight and other environmental controls that are easily accessible. Cons: It does not support certain features such as shared media (i.e., web-on-a-prim) or mesh. I also find that on my computer Phoenix tends to crash more than other viewers.

In anticipation of mesh, the developers of Phoenix have created Firestorm. This is a more V2-compatible viewer that will likely be a more robust alternative to SL’s V3. I’ve only begun to use it, expecting that Phoenix will eventually be phased out.

If I absolutely need a feature of SL V2 (basically shared media), I do use the official viewer on occasion. It was pretty much universally despised from the first day (thus the proliferation of third party alternatives), but it has improved significantly since then. However, it’s still awkward for builders and other “power users” and lacks many of the TPV advantages.

For taking pictures in SL, I use either Imprudence or Kirsten’s Viewer. I’ve found Imprudence to be the most stable of any viewer and it has most of the features included in Phoenix. It is my preferred viewer for taking pictures as the resolution seems to be best. (May just be an issue with anti-aliasing settings on other viewers, but I haven’t found a solution in Phoenix.) Kirsten’s was the first to implement shadow rendering. (Shadows in SL are very cool when well rendered, and other viewers have since implemented the feature, but be aware that the rendering load on your computer is massive and will tend to crash anything less than a very high end graphics card.) It has a number of other settings that are designed especially for photography and machinima (i.e., virtual cinematography), but it is less friendly for day to day use.

The major change in 3.0 is the ability to render mesh objects. As of Tuesday, mesh technology (the high-resolution modeling used in computer-generated cinema and dedicated game consoles) is now available everywhere in SL. It has been anticipated for the last few years and has been in beta for some time. I’ve been to the beta grid and spoken to people there working with meshes. Some of their work is stunning.

The consequences of this technology could be vast or minimal, depending on how it’s adopted. Creating meshes is quite a bit more complex than creating sculpts, which are difficult enough. Only serious creators are getting into it at this point. In the beta grid, users were able to create mesh objects in Google Sketchup and export them to an SL-compatible file. There seems to be some question now whether that is possible with the full SL deployment. There are other free programs that can be used (most notably Blender), but there is no other “simple” tool for creating meshes as far as I know. That could change.

There has been a lot of excitement about the advent of mesh. Both good and bad. The technology itself renders beautiful complex objects. In fact, there have always been meshes in SL in the form of your avatar body, which is actually a relatively simple mesh. The important thing to know about a mesh is that it is made up of many triangles in a kind of web fabric (hence the term “mesh”).

At the risk of oversimplifying, the core difference between a sculpt and a mesh is the number of triangles it uses. When you create a sculpt, it may be a 64×64-pixel sculpt map or smaller. You can apply that to a prim in SL and manipulate the size of that prim, and nobody really cares how convoluted or big it is. But with a mesh you have a whole set of vectors for those triangles. There could be thousands or tens of thousands of them in a mesh, and if you make it bigger it takes more triangles. The important point is that the more triangles a mesh uses, the more server resources it commands, and THE MORE IT COSTS. That’s right. SL in its infinite wisdom has determined a formula for assigning a Prim Equivalent (PE) cost. So you may create a simple mesh that has maybe 1 PE, but if you just make it bigger it can suddenly show 100 PE. I saw meshes in the beta grid that went from maybe 24 to over 2000 PE just by being scaled up. When a sim has a total limit of 15K prims, you can see how a bunch of meshes could quickly hit your limit. So mesh developers are motivated to use the smallest possible objects with the least amount of detail.

In a peculiar twist, the way PEs are calculated includes scripting. If you have a simple notecard giver script in a mesh, it will cost more. The mesh developers I’ve talked to rightfully think this is bogus. In any case, the promise of mesh is here, but it remains to be seen when or whether it will be widely adopted.

Ubiquity as Silent Revolution

I came across an interesting article titled “What happens when computers stop shrinking?” by the popular and articulate physicist Michio Kaku about the apparent impending demise of Moore’s Law. One tangential statement in the article got me thinking:

“The destiny of computers — like other mass technologies like electricity, paper, and running water — is to become invisible, that is, to disappear into the fabric of our lives, to be everywhere and nowhere, silently and seamlessly carrying out our wishes.”

By “computers” Kaku is really talking about “computing devices,” including everything from musical greeting cards to automobile engines to spacecraft. (He mentions a point of trivia that a modern cell phone has more computing power than all of NASA had in 1969 when they first went to the moon.)

The ubiquity of logic devices in everyday life is likely to continue at an exponential rate, regardless of the scale issues cited by Kaku. If things can’t get smaller, people will find ways to make them more efficient. Just because silicon has limitations doesn’t mean there are no other options (like graphene based chips and holographic architectures).

So what does this mean for living in the 21st century? At some point we will forget about the numbers. How fast our CPUs are in megaflops. Gigabytes, terabytes, petabytes, and on and on past zettabytes (10^21 bytes, roughly the total world output of data). At some point it becomes virtually infinite because we will have more capacity than we can possibly use.

This happened with cell phone service and early Internet access. I recall subscribing to Prodigy and getting something like 90 minutes of connect time a month. There was no Web yet, and about all there was to do was email and Usenet groups. I’d log in on my 2400 baud modem, exchange emails, and log off. Then someone decided the unused capacity was sufficient that they could offer unlimited access for a reasonable price. Something similar happened with the iPhone 4 screen resolution, which is claimed to be greater than the eye can perceive (the so-called “Retina Display”).

Soon we will simply assume any object we use (or interface with) will have some kind of logical device embedded in it. As high resolution displays and intuitive interface technologies become more portable, desktop computers and monitors will be made obsolete.  We will be wearing our interface with the Net and it will be a part of everything we do. When shopping, we’ll have displays showing us information about any product we’re looking at. Our Net interface will become entirely intuitive and we will wonder how we ever got along without it.

This may seem  futuristic and difficult to imagine, but the reality is that the tech industry is pushing hard to make it happen sooner than later. Social media are making us dependent on the constant flow of data about our interests and relationships. We will still have our individuality, I think, but our connection will be an intimate exchange with the global mind.  If you use text chat a lot, you are probably accustomed, as I am, to using Google or Wikipedia to grab a bite of data — the word you were looking for or its spelling — or even to carry on a conversation in another language.

“I feel a disturbance in the Force.” – Obi-Wan

Such immediate real-time interaction promises to become more and more part of our intuitive means for everyday communication. If we are not able to connect with the people or data we want, we will sense that there’s something wrong. Not being able to find out what I want to know causes stress. I’m sure it’s akin to an addictive response. When I query Google and an answer does not appear easily, I will assume that I have posed the query improperly. And if I am still stymied I will feel frustrated because I’m fairly certain an answer must be there.

“I know this is the answer because I asked myself and this is what I said.”

We’ll be able to sense changes in the flow of information and will seek answers as to the causes. If the flow of data from a certain sector increases, it will likely mean something important is happening there so we will turn attention to it.

What this means for humanity is likely to be the source of great debate over the coming decade and more. As we become more connected the lines between me and not-me begin to blur. The obvious parallel is the vision of Star Trek’s (TNG) Borg collective.

[Image: Patrick Stewart as Locutus (via Wikipedia)]

Such a vision can certainly be frightening, but one could also look at it as a potential good. Whenever there is a problem, we are not alone. We become part of a collective intelligence that is much greater than any individual could hope to be. A world in which cooperation is essential for the health of the whole. This vision does not preclude individuality, or even subversion. But difference becomes a matter of will rather than psychosis. When we have communication and knowledge, we can make more intelligent choices that support the collective good. And that includes challenging collective assumptions in the name of art.

“If there’s nothing wrong with me… maybe there’s something wrong with the universe.”  –Dr. Crusher (Star Trek TNG)


Machinima is the art of movie making in virtual worlds. An immersive environment like Second Life allows users to create extravagant sets and characters and to move the camera and actors with fairly simple scripts. This is an interesting new art form that is just beginning to get serious attention. The National Endowment for the Arts has recently opened its grant programs to creative work in virtual “game” environments. (The use of the term “game” is for purposes of public understanding, even though the kinds of activities that go on are generally not at all game-like.)

I have barely dabbled in the technology for machinima. In many ways it’s not much different from any other cinematic art form as far as storyboarding and editing. But the actual video production in virtual worlds is quite different. You have complete control over the appearance of your environment, from land forms to the color of the sky to the density of water. You can freeze the time of day and set the sun wherever you want. You can have complete control over camera position and movement, either by scripting or using a joystick device. The structures and props used can be found or bought or specially created. The avatar actors can be made any reasonable size, gender, or species, and their costuming and makeup are unlimited. You can have a 60 meter tall dragon or a small anthropomorphic possum. All this at minimal or no cost.
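To give a flavor of what scripted camera control looks like, here is a minimal LSL sketch for locking the operator’s camera to a fixed shot. The offset vectors are arbitrary placeholder values, and a real machinima rig would be considerably more elaborate:

```lsl
// Hedged sketch: lock the wearer's camera to a fixed shot for filming.
// The position and focus offsets are arbitrary placeholders.
default
{
    state_entry()
    {
        // PERMISSION_CONTROL_CAMERA is granted automatically to attachments
        llRequestPermissions(llGetOwner(), PERMISSION_CONTROL_CAMERA);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_CONTROL_CAMERA)
        {
            llSetCameraParams([
                CAMERA_ACTIVE, 1,                                // take over the camera
                CAMERA_POSITION, llGetPos() + <-5.0, 0.0, 2.0>,  // place the "lens"
                CAMERA_POSITION_LOCKED, TRUE,
                CAMERA_FOCUS, llGetPos(),                        // aim at this prim
                CAMERA_FOCUS_LOCKED, TRUE
            ]);
        }
    }
}
```

Calling llClearCameraParams returns the camera to the user when the shot is done.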

There are limits, of course. The size of the set is usually limited, the animations for the actors are fairly crude (especially the mouth when speaking), the range of facial gestures is limited, etc. But the advantage of being able to produce fairly sophisticated cinema on a budget is compelling.

As users learn new techniques and become more adept with the art form, more and more impressive work is emerging. A prime example of this is the MachinimUWA III competition currently in judging at the University of Western Australia’s presence in Second Life. There are no fewer than 50 entries being considered for a top prize of about $400 US. You may think this doesn’t sound like much and you’d be correct. But in the context of virtual worlds, this is one of the top prizes ever awarded. (In SL, the prizes are awarded in Linden dollars at an exchange rate of approximately 250 to the US dollar. The top prize is 100k Lindens and the total prize pool is 660k Lindens.) Rather than think of this as a paltry sum, it is wiser to understand how much creative work can be generated with very little in resources. With a mere $2500 US or so, UWA has provided the incentive to produce 50 pretty decent works of art. I’d say this was a heck of an investment.

A recent development in the competition has been the engaged involvement of world-famous director Peter Greenaway as one of the judges. He and (especially) his wife have been involved in the art scene in SL for some time. UWA honcho Jay Jay Jegathesan (JayJay Zifanwe in SL) managed to get an important interview with Greenaway (from which the title of this post is taken) about virtual cinema, which is compelling reading. I hope all the judges consider what he has to say when assessing the entries. It could make for some very interesting results.

The complete list of entries for MachinimUWA III is on the UWA website. The awards ceremony will be held at the BOSL Amphitheatre at UWA, 6AM SLT (GMT-7) Sunday May 22.

Review: A Consumer Guide to Virtual Worlds

I just purchased a PDF download from the Association of Virtual Worlds (an organization that provides networking and other resources primarily to business and education enterprises), called A Consumer Guide to Virtual Worlds. It’s basically a list (over 375 listings total) of grids, but it also includes social networking sites like Facebook and MySpace, as well as content sharing sites like Flickr. I also noticed it includes 3ds Max, which is a 3D graphic design and rendering program, not really a virtual world in the sense of a space anyone else can share. The entries are extremely concise, with no more than a sentence or two of description. They are not rated, but they have a screen shot (usually a home page) for each and they do add category designations (e.g., adult, teen, kids, games, enterprise, MMORPG, social network, etc.). The list itself is alphabetical, which is fine if you want to look something up. It would be a more useful book if the user could interact with and sort the listings—a natural advantage of a database over a printed directory. The PDF is searchable, but that doesn’t allow for sorting and filtering the listings. I presume it was decided to go with a static PDF in order to make it a marketable commodity. (They obviously sold me on it… )

At first glance I’ve already noticed a few errors, including some really outdated information. I get the impression that the editors did not look very closely at a lot of these sites. For example, I looked into OpenCroquet a few years ago. It’s an open source platform for developing virtual worlds. Development on that project effectively ended upon its release; new development branched off, using the Croquet foundation as the basis for the OpenCobalt project. You can go back and get the OpenCroquet source and start over from there, but I don’t know why anyone would.

There are other errors, such as the listing for AlphaWorld, with an image and link to ActiveWorlds. This is not technically incorrect, as AlphaWorld was a former name for ActiveWorlds (it was renamed in 1995) and remains the largest virtual space within ActiveWorlds. The listing is simply not sufficiently descriptive and the link is not helpful.

If this were a print publication it might have gone through more rigorous proofreading and fact checking. Its primary usefulness lies in its scope. There are a lot of virtual worlds out there and more popping up every day (another reason to make a list like this more dynamic). If nothing else, it may stimulate some thinking about the breadth of the metaverse.

The download is $5.99 from:

Making Music Objects in SL

The shortcomings of music and sound in SL are massive. There are various ways to do sound:

External feeds use the embedded QuickTime player to render web media. The media can be an asynchronous file download or a (relatively) synchronous stream. The latter is how live music is done. In either case you have to set the source URL in the parcel media settings, available only to the parcel owner or via an object owned by the parcel owner and set to allow others to use it.
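In script terms, an object owned by the parcel owner can set and start the media with llParcelMediaCommandList. A minimal sketch (the stream URL here is a placeholder):

```lsl
// Hedged sketch: set the parcel media URL and start playback on touch.
// Only works if this object is owned by the parcel owner.
default
{
    touch_start(integer total_number)
    {
        llParcelMediaCommandList([
            PARCEL_MEDIA_COMMAND_URL, "http://example.com/stream",  // placeholder URL
            PARCEL_MEDIA_COMMAND_PLAY
        ]);
    }
}
```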

When there is a media URL set, the viewer’s media player controls will show play/stop buttons.  (This information applies generally to both audio and video media, which are separate, but operate similarly.) You can set your viewer to automatically play media when you enter a parcel, but most people don’t do that because it can be a bit jarring to cross a parcel boundary and go from a classical harp solo to a death metal band without warning.  So if it’s important for people to hear what’s playing on your parcel, you generally have to let them know that.

Another option is the new so-called “shared media” which allows anyone with build permissions on a parcel to create a prim with a web player on any or all surfaces.  There are so many complications with that (not the least of which is that it requires the largely reviled Viewer 2.x), that I have only played with it a little and have not yet found it worth bothering with. I’m sure it will be much more widely used when it’s incorporated into third party viewers that are actually usable.

Aside from external media feeds, there are sound emitting objects. The most common of these are ambient sound generators with birds and crickets or whales and seagulls or whatever. These operate by looping short WAV files. An object can only play one file at a time. The files are monaural but spatialized, i.e., you can locate the sound emitter by stereo position and volume.
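An ambient emitter of this kind is about as simple as LSL scripting gets: drop a sound clip into a prim along the lines of this sketch.

```lsl
// Minimal ambient sound emitter: loops the first sound clip
// found in the prim's inventory at half volume.
default
{
    state_entry()
    {
        llLoopSound(llGetInventoryName(INVENTORY_SOUND, 0), 0.5);
    }
}
```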

The problem with sound emitting objects is that WAV files tend to be large and slow to load. Creating sound emitting musical instruments is primitive at best. I built a carillon that’s kind of impressive. It’s about 2 octaves, chromatic. You touch a key and it moves, and the clapper on an associated bell also moves. The bell is for visual effect only; the sound actually comes from the key. (I considered changing that so it’s more spatially accurate, but the size of the thing results in a fairly dramatic difference in volume between the closer and farther bells.) With something like 30 keys, I had to create and upload individual WAV files for each. The script plays whatever sound file is in it, so I was able to clone the keys and just drop the appropriate pitch in each. Playing it causes the files to download to the viewer, where they are cached. So you basically have to play all the notes and then wait a minute for them all to cache before you can actually play music without delays. That is also true for anyone listening. So if you’re playing and someone new arrives, what they hear will be disjointed and weird until the sounds have cached on their viewer. (There is a preload sound option, but it’s not efficient and can lag the sim while it scans for people, and it runs whether they want the sounds or not.)
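The key script is essentially interchangeable across keys; a minimal sketch looks something like this (the actual carillon script also animates the key and clapper, which is omitted here):

```lsl
// Sketch of a carillon key: on touch, play whatever sound clip
// has been dropped into this prim's inventory.
default
{
    touch_start(integer total_number)
    {
        llPlaySound(llGetInventoryName(INVENTORY_SOUND, 0), 1.0);
    }
}
```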

The kicker is that no WAV file can exceed 10 seconds in length. And it costs 10 Lindens (a little over 4 cents) to upload each file. You can daisy chain files together to run in sequence, but each file still needs to load and cache. You can also loop a file to play a continuously repeating sound, but it’s nearly impossible to do this without an audible gap at the repeat. The best you can do is use a sound that is sufficiently chaotic that the blip becomes inaudible.
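Daisy-chaining is usually done with a timer: play one clip, wait up to 10 seconds, then play the next. A sketch, assuming the clips are named so they sort in playback order (prim inventory is alphabetized):

```lsl
// Sketch of daisy-chained playback: step through every sound clip
// in the prim's inventory, one every 10 seconds.
integer gIndex;

default
{
    touch_start(integer total_number)
    {
        gIndex = 0;
        llSetTimerEvent(0.1);       // kick off the sequence
    }

    timer()
    {
        if (gIndex < llGetInventoryNumber(INVENTORY_SOUND))
        {
            llPlaySound(llGetInventoryName(INVENTORY_SOUND, gIndex), 1.0);
            ++gIndex;
            llSetTimerEvent(10.0);  // clips can be at most 10 seconds long
        }
        else
        {
            llSetTimerEvent(0.0);   // done; stop the timer
        }
    }
}
```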

There are a number of musical instruments that use this technology, first pioneered several years ago by Robbie Dingo, best known for his ubiquitous Elven Drums. The drums are very clever in that they are able to play any of several rhythm loops in sync with other instruments. There is very little active “playing” by the user. But with his “hyperflute” and “hypercello” you are given a HUD that resembles a keyboard that can be played with the mouse.

Other instruments are primarily for show. There are tons of pianos around that play music. Most play brief excerpts using chained files. In one extraordinary example, a rather lovely harpsichord plays the entire Bach Goldberg Variations. (You have to load and play each one separately. The music files are actually in the sheet music prims — a common means of making pianos with multiple tunes.) I calculated that this would require approximately 420 10-second WAV files, each uploaded at a cost of about 4 cents, or $16.80 US. The upload cost is negligible compared to the time and energy it must have taken to dice these tracks into 10-second files. I presume it was automated somehow.

There is hope that alternative means of live sound production may evolve. There are a lot of people interested in making music in virtual worlds. It is not sufficient to want it, of course. At some point it has to integrate with the viewer. As viewer development goes into high gear in the open source community, perhaps developments will emerge sooner.

Taylor Education Building in SL

Taylor Education Building is an iconic building on the campus of the University of Kentucky. The College of Education asked me to build a replica of the facade for their space on University of KY island in Second Life. I decided to build it roughly to scale, which makes it one of the more imposing builds on the island.

Virtual Haoshang Bridge

On the way to a massive Tang Dynasty Buddha statue carved into a rock wall near Leshan in  southern Sichuan, China, there is an ornate pedestrian bridge over a small river near the confluence of the Minjiang and Dadu Rivers. I happened across photos of this lovely structure online and thought it would be an interesting large scale building project in SL. I have only just begun the project and I have no idea what I might do with it once completed.

The obvious difficulty, as with any replica of a rl structure, is the level of detail. Almost anything is possible given unlimited prims to work with, but that is certainly not the case here. I’m working on a temporary platform over the sandbox area of the University of Kentucky’s sim. To save prims I used a 10x60x40 tube with hole at max and cut path for the main span. The end masonry has some overlapping prims that will need detailing. The current build extends 120 meters long.

The octagonal structures on the ends are necessarily primmy, since you can’t make a hollowed octagon. I found an octagon sculpt that is not bad, but of course it cannot be cut. I’ll be able to use it for the roof framing, but the walls and rails will have to be constructed.

More pictures to come!

Skills Learning in Second Life

I’m a big fan of building classes. Most formal building “schools” train and mentor their instructors and use a standard format that moves you through the class so you get all the info you need and are done at the end of the hour (though instructors usually hang out a while afterward to answer questions). Most basic project-oriented classes will take you through several essential skills, like prim manipulation and linking, texturing, and dropping in a script. They may go so far as to have you open a script and customize a variable, but nothing complicated unless the class is specifically script oriented.

The standard building class in SL is one hour. Note that people attending “free” classes are strongly encouraged to tip the instructor and venue. In most cases instructor tips are shared with the venue, so I usually just tip the instructor. My usual tip is 200-300L. Paid classes are sometimes worth the money, especially for more advanced training. They tend to have more serious students and the cost is usually pretty reasonable. There are a couple of reputable schools that charge fees.

For free classes, New Citizens Inc. (NCI) has been around for a very long time and is probably the most popular place for newbies to learn new skills. They also have social events that are noob friendly. Classes every day. Everything is always free, I believe. I haven’t taken a class anywhere in a long time. In my experience NCI tends to be crowded and there’s always someone in the more advanced classes asking “what’s a prim?”. The really basic classes can be a bit chaotic, but the teachers are usually trained to handle disruptions.

I came across this place just yesterday. Happy Hippo has many free classes (tips strongly encouraged) and they seem to have a good reputation. They also have free and paid tutorial kits. Mostly really basic objects. Build a table or chair or lamp or something.

I’ve never taken a class at Builder’s Brewery, but they have a good reputation. They have sponsored some high profile architectural competitions, etc. I don’t have any information about their classes (whether paid or free) other than the schedule. They sell a lot of building materials, especially textures.

I’ve only taken one class at Rockliffe University and it was a long time ago. They charge for classes, but they have programs for certification in virtual skills and so on. They have a good reputation. Their online calendar appears to be current, but their website is a bit ragged and dated.

Jenette Forager holds weekly tools.jam sessions sponsored by her Epoch Institute. They’re on Tuesdays at 12:30PM. Go to the Institute’s site at IMMERSION: tools.jam, Wells (87, 53, 26) and touch the Subscribe-o-matic to get notices. Tools.jam often has a product developer come in to talk about their latest cool gizmo, usually with an education orientation. The compelling reason to go, aside from previewing some pretty cool gadgets, is that the presenter will often offer free copies of the things to the people in attendance.

BTW, have you seen this TED lecture by Jane McGonigal? She claims we should be spending more time playing computer games in order to solve the world’s problems. Works for me…

Using OpenSim for K-12

I was asked my opinion about OpenSimulator for high school educational use. The parties were considering alternatives to the Second Life teen grid because of costs and the need for background checks and so on for the adults. OpenSimulator (OpenSim) is the same game engine used to run Second Life. Or at least it was. SL has developed its platform well beyond anything OpenSim has done, but OpenSim does have advantages. You can run your own sim on your own computer; you can link that to others through services like OpenGrid; or you can rent space on someone else’s server. Here was my reply:

I have not tried to establish an OpenSim server myself. I haven’t really had the need to have an offline development space and I don’t see much point to it otherwise unless one is planning to connect it to OpenGrid or something. Anyway, I honestly don’t know what it takes to set up an OpenSim server, especially one that is accessible by others. My impression is that it’s not that hard to implement out of the box, but keeping up with updates and managing the configurations can be a challenge. I don’t think I’d recommend it without having someone knowledgeable to maintain it.

You might want to check out ReactionGrid. It’s a PG grid, open to anyone. It’s very affordable, especially if you buy multiple sims. Some people I know got 4 sims and they pay something like $75 or $100 a month total. Don’t remember exactly. Each sim supports 45K prims (total 180K!) and there are a lot of interesting things you can do there that you can’t do on SL. (Create megaprims up to 256 meters, link distant objects, etc.) It’s small enough that you can get pretty good personalized service.

I met someone there who was a middle school teacher and her kids were coming in to build things there. I think they’re not allowed off their island, though there is nothing to stop them, as far as I know. She uses login credentials for each student that are all registered to her account. It seems a safe environment, and anyone not adhering to PG can be reported and swift action would be taken.

HOWEVER: The advantage of a hosted sim is, of course, that they take care of the tech details so you can be concerned with actually using it. That’s a huge plus. The disadvantage of any of the OpenSim grids is that they do not perform at a level anything like what you are used to in SL. For me, it’s very much like starting over 4 years ago. It uses the old physics engine, it crashes a lot (not just your viewer, but the sim itself), and it’s unbelievably laggy even on an empty sim. I find it incredibly frustrating to do work in that environment, but I know others who are thriving and spend most of their creative time there.

OpenSim Scripting Language (OSL) is very similar to LSL, but there are a few key differences that mean a lot of LSL scripts just don’t work there or need to be tweaked. OSL is a stricter language, so syntax is less forgiving. Unfortunately, there is no comprehensive resource for you to find out how to use it correctly. The documentation, frankly, sucks. However, OSL also has some interesting features that LSL lacks, including the ability to program in C#. I don’t know much about that, but it does present opportunities for advanced programmers to do interesting things.

So bottom line is, definitely check out ReactionGrid. Talk to the people there. They seem friendly and helpful. You would certainly have more flexibility at less cost than SL. If you don’t need the social and educational opportunities of a wider social grid like SL, it could be a good solution.
