Sunday, September 13, 2009

Machinima Viewer: Take 2

At this point I'm fairly sure it's no mystery that the concept of utilizing the Metaverse for machinima fascinates me. While it is still in an awkward stage, I feel that machinima will one day be considered yet another legitimate form of storytelling. The Metaverse holds particular promise as a production platform because, unlike other systems, the creation of unique content is widespread, it is stylistically flexible, and it is unconstrained by conceits of what should be done with the platform. This, coupled with the ability for widespread collaboration and less draconian copyright policies, makes the system ideal for the production of machinima. With that in mind, it is worth highlighting that while the potential for greatness exists, the tools to leverage this potential have yet to be built.

In a previous post, I described a very ambitious viewer project that would have allowed machinimatographers (who I'm going to call 'Toggers from now on for sanity's sake) to capture and edit machinima footage in new and powerful ways. While I still hold that system as an ideal, several months of continued introspection have revealed that the construction of such a system would be a herculean effort, with technical challenges that few are willing to tackle at this point in time. A different approach must be taken as a first step. A 'Togger-friendly viewer is possible with existing code: an existing viewer could be stripped down to the functionality relevant to 'Toggers and then enhanced with new functions as development continues.

Here's a list of the basic functionality a 'Togger would need for setting up a scene and shooting:
Avatar movement
Camera controls
Teleportation
Basic communication (Chat, IM)
Inventory access
Object placement and positioning

Advanced Functionality:
Fine control over camera behavior (maximum speed, camera roll, zoom control)
Local camera view bookmarking
Bookmark-to-bookmark camera animation (see the sketch after this list)
Timeline-based camera animation controls
Aspect ratio window sizing
Shot composition screen overlays
Easier toggling of interface elements
Triggered animation control over actors' avatars (RestrainedLife API?)
Real-time shadows
Built-in footage capture (ditch FRAPS)
Local texture substitution
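
To make the camera bookmarking and animation items above concrete, here's a minimal sketch in Python of how a viewer might move the camera from one saved bookmark to another. Every name here is hypothetical (no current viewer exposes this API); the approach itself is the standard one: interpolate the position linearly, slerp the orientation, and ease in and out so the shot doesn't jerk.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraBookmark:
    """A saved camera pose: position plus orientation as a unit quaternion."""
    pos: tuple  # (x, y, z) in region coordinates
    rot: tuple  # (w, x, y, z) unit quaternion

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                    # nearly parallel: lerp and renormalize
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        norm = math.sqrt(sum(c * c for c in out))
        return tuple(c / norm for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def camera_pose(a: CameraBookmark, b: CameraBookmark, t: float):
    """Camera pose at time t in [0, 1] along a bookmark-to-bookmark move."""
    t = t * t * (3 - 2 * t)             # smoothstep ease in/out
    pos = tuple(p0 + t * (p1 - p0) for p0, p1 in zip(a.pos, b.pos))
    return pos, slerp(a.rot, b.rot, t)
```

Timeline-based animation would then just be a chain of these moves, one segment per pair of bookmarks.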

There is even more functionality that could be added, but that's for another post. I've restrained myself from giving this one a cheesy pseudo-title, partly because I don't want to presume on someone else, and partly because I just can't think of anything witty right now. Any takers?


Davinci: Open Source Engineering Tools (Part 2)

Continued from Part 1

So, without further pontification, let me introduce you to the project I used to call Starshine, but that I now call Davinci (with no small amount of irony). Davinci is a family of four programs designed to work in close conjunction with each other: Scrawl, Workshop, Notion, and Plaza. Scrawl is a CAD/CAE solution designed to allow contributors to design and modify individual pieces easily and efficiently while remaining faithful to the requirements of the larger project. Workshop is a high-level assembly interface for connecting designs made with Scrawl into larger, more complex projects. Notion is a modular simulation framework utilized by Scrawl and Workshop to simulate a wide range of behaviors and conditions. Finally, Plaza is an online versioning system akin to SVN, but custom-built for organizing and storing large Davinci projects. Together they form a tight ecosystem of functionality that allows widespread collaboration. Allow me to describe each system in a bit more detail.

Scrawl's main functionality is fine-level design of individual parts within a larger project. Like most CAD/CAE programs, it would allow for both schematic and parametric (perspective) views of a design, the ability to define the solid geometry of the design, and the ability to define the physical properties of parts. To facilitate flexibility, Scrawl would allow the import and export of geometry from programs outside of Davinci, a feature also shared by many CAD/CAE programs. However, Scrawl would allow for all of this within the context of the larger design. Should constraints from that larger design exist, the interface will show the physical space constraints, connection points, and other relevant data in relation to the design. The design can be tested by Notion, which will in turn create a metadata file, including the performance of the piece, for use by Workshop. The files Scrawl saves out will be rich in metadata, including physical properties as well as the creator's name and the intellectual property license under which the creator wishes the work to be shared.
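
As a rough illustration, here's what one of those metadata-rich Scrawl files might carry, sketched as a Python dictionary. Every field name below is invented for the example, not part of any real format:

```python
# Hypothetical sketch of a Scrawl part file's metadata (all names invented).
scrawl_part = {
    "name": "mounting_bracket",
    "creator": "jdoe",                    # the creator's name travels with the part
    "license": "CC-BY-SA-3.0",            # the license the creator shares under
    "geometry": "mounting_bracket.step",  # solid geometry, import/export friendly
    "physical": {
        "material": "6061 aluminum",
        "mass_kg": 0.12,
    },
    "constraints": {                      # context inherited from the larger design
        "envelope_mm": [40, 25, 8],       # physical space the part must fit within
        "connection_points": ["bolt_hole_1", "bolt_hole_2"],
    },
    "simulation": {                       # written back by Notion after a test
        "engine": "static-stress",
        "max_load_n": 450,
    },
}
```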

By itself Scrawl is nothing particularly special. Its true potential comes from its tight integration with Workshop, which allows for the assembly of higher-order designs. Workshop is all about applying an object-oriented approach to the design of multi-part inventions. Users can load Scrawl files and even other Workshop files into a single assembly environment. From the interface, users can combine these sub-components much in the same manner LEGO bricks can be pieced together. The user can define the level of binding between individual pieces, approximating bolts, welds, or greased joints. The user can inspect the metadata of each individual piece, as well as open its respective Scrawl or Workshop file. This power to include other Workshop files, I feel, is a must. It is the equivalent of the #include command I mentioned earlier, as it allows for development of sub-components independent of the larger project. However, simply assembling these higher-order designs is somewhat constrained in utility without the ability to test the system as a whole. This is where Notion comes into play.
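
Here's the same idea sketched for Workshop: an assembly that pulls in a Scrawl part and a nested Workshop assembly, with the binding types described above. Again, the names and format are purely hypothetical:

```python
# Hypothetical Workshop assembly. Components may be Scrawl parts or other
# Workshop assemblies, echoing the #include analogy.
workshop_assembly = {
    "name": "camera_gimbal",
    "components": [
        {"id": "bracket", "ref": "mounting_bracket.scrawl"},   # a leaf part
        {"id": "head",    "ref": "pan_tilt_head.workshop"},    # a nested assembly
    ],
    "bindings": [
        {"between": ["bracket.bolt_hole_1", "head.base_hole_1"],
         "type": "bolt"},              # could also be "weld" or "greased_joint"
    ],
}
```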

Rather than being a monolithic simulation program such as those used by most CAD/CAE systems, Notion is far more akin to a rendering engine framework such as those found in professional 3D animation packages. The key thought behind this is flexibility. Different users will want to test for different things, and different projects will necessitate very different kinds of tests. For example, the designer of an airplane may not care much about simulated crowd-flow within a structure, but it is an essential consideration for the designer of a subway station. Conversely, the designer of a subway station probably cares little about hydrodynamic flow, whereas it is absolutely crucial to an airplane's design. This is a simplistic example, but the point is that forcing a single set of simulation tools on users is not only limiting to them, it also limits the applications of Davinci. Notion would act as the ambassador between simulation engine modules and Scrawl/Workshop. A user looking to test a design would select the elements to test in either Scrawl or Workshop, and then specify the test and simulation engine to use. Notion would then glean the needed data/metadata from the selected elements and feed the information to the simulation engine. The output from the engine would be fed back through Notion and displayed within the interface of the originating program. Notion's functionality would not stop there, however. For Workshop files, Notion would allow for multi-level simulation. This would allow for tests to be performed not just on static proxies of sub-components, but at a level of to-the-part accuracy. While far more computationally expensive, it would allow for the capture of "gotcha" mistakes, such as an unintended weight shift due to a sub-component's movement or an unexpected loss in performance due to cross-component heat pollution. The aim of such functionality is to come as close as possible to a fully realistic virtual prototype. While perfection of such a prototype is probably out of the reach of any system in the near future, something close could be attainable.
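
To show what "ambassador between simulation engine modules and Scrawl/Workshop" could mean in code, here's a minimal Python sketch of a plugin interface. The class and method names are mine, invented only to illustrate the dispatch idea:

```python
from abc import ABC, abstractmethod

class SimulationEngine(ABC):
    """Interface every pluggable Notion engine module would implement."""

    @abstractmethod
    def accepts(self, test_kind: str) -> bool:
        """Whether this engine can run the requested kind of test."""

    @abstractmethod
    def run(self, elements: list, params: dict) -> dict:
        """Run the test on the gleaned element data; return results."""

def run_test(engines, elements, test_kind, params):
    """Notion's dispatcher: route the test to the first capable engine and
    hand the results back for display in the originating program."""
    for engine in engines:
        if engine.accepts(test_kind):
            return engine.run(elements, params)
    raise LookupError(f"no engine installed for test kind: {test_kind!r}")
```

A crowd-flow module and a hydrodynamics module would then just be two more classes implementing the same interface, installable independently of each other.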

With all these components and simulations creating piles of data and metadata, some sort of organizational system would be critical for any serious collaboration. This is where Plaza becomes crucial. Plaza would be a server platform consisting of several different services. The most critical service would be a specially-designed SVN-like system that would intuitively and securely archive the data generated by the innovation process. A second service in Plaza would allow it to act as an abstracted simulation module for Notion. This service would allow Notion to leverage large clusters of connected servers for especially complex simulations. To facilitate real-time collaboration on a single file, a service based on the Uni-Verse code-base would also run on Plaza. This would allow multiple contributors to work on a unified design simultaneously, an important feature for when designs get above a certain level of complexity. One final service would be an API allowing third-party applications to securely access the data stored in the repositories. This would allow developers to expand the family of applications that can leverage Davinci. Such applications might include statistical comparison software for comparing the technical merits of different design variations, or a virtual reality walk-through of designs. The possibilities are endless, which is why creating a robust and flexible API would be so crucial.
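
For flavor, here's what the third-party side of that API might feel like, sketched in Python against a completely made-up endpoint; nothing here reflects a real service:

```python
import json
import urllib.request

BASE = "https://plaza.example.org/api/v1"   # hypothetical Plaza endpoint

def fetch_part(project: str, part: str, rev: str = "HEAD") -> dict:
    """Pull one revision of a part (geometry reference plus metadata)
    from a Plaza repository, for use by a third-party application."""
    url = f"{BASE}/projects/{project}/parts/{part}/revisions/{rev}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# A statistical comparison tool might fetch two design variations and
# diff their Notion-generated performance metadata, for example.
```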

I think I've rambled enough on this system for now. This was meant to be first and foremost a conversation starter, so I look forward to your thoughts. I most certainly lack much of the technical expertise it would require to build such a system, which is why I would very much like to see such a system developed as open source. There's something poetic about open source being the key to the creation of open source hardware. In short, if this has sparked an interest in you, feel free to adopt the concept and dive into fleshing it out.


Davinci: Open Source Engineering Tools (Part 1)

Tonight I want to talk with you about something I have struggled to articulate for some time. This idea has been with me for close to, if not over, a year, and yet until recently I was finding it very hard to describe to others. I even posted about it once in my old ideas blog, but even as I wrote that version I felt frustrated by my lack of clarity on the topic. This is my second opportunity to do it justice.

I am, as you may have gathered, an advocate of tools. While it is becoming clearer by the day that humans are not the only animals with the ability to conceive and utilize tools to achieve our aims, it is one of the defining characteristics which has allowed us to thrive as a species. This is one of the reasons I am such a rabid advocate of open source. It calls on the better angels of human nature to facilitate collaboration in the search for better tools, accessible to all. Yet despite all of the wonders open source has provided the world of software, I would hazard that I am one of many who feel that open source must expand beyond the realm of software to deliver its best gifts to humanity. Open source must breach the divide and become a tool for the innovation of corporeal inventions. Several attempts at this have already been made, or are under way. However, their dream will never reach full fruition without confronting a basic reality: advocates of open source hardware lack the equivalent tools that their software compatriots take for granted. Without these tools, open source hardware cannot achieve the same success that open source software has enjoyed.

When a coder sits down at a computer to write a program, they have all the tools necessary for the act of creation and collaboration at their fingertips. Code can be written in a free text editor or software editing application. That code can be compiled, for free, by a compiler residing on the very same computer. The coder can test the fruits of their labor for free and, in most cases, without fear of harming themselves, their computer, or their work. Now, to be fair, all of this functionality can be mimicked by an engineer utilizing a CAD/CAE program. An engineer can design a piece of machinery and test its basic functionality, safely and (depending on the software) fairly cheaply. In this regard, the two systems are relatively similar. However, the differences begin to emerge when collaboration, an essential ingredient of any open source project, comes into the picture.

The power of open source derives from the ability of an individual coder to contribute a relatively small amount of work which can easily be merged into a larger, more complex project. The core of open source is the acknowledgment that not everyone is Superman, and that many people contributing just a little can add up to something greater than the sum of its parts. A coder working on an open source project can easily download all or part of the larger project, make changes, compile the entire project, and test it. Adding the work of others to an existing program is also relatively painless, requiring only a few lines of code to instruct the program how to access the new code and call its functions when needed. Indeed, the command #include and its kin are some of the most powerful commands from the perspective of open source. They are powerful because they allow a single coder to quickly add the work of another coder, making collaboration not only easy, but in many cases easier than working completely alone. This is where the design of real things runs into trouble. When a team of committed, determined engineers sets out to create a large open source design, they will quickly realize that while they can all design individual pieces and test them individually, there is practically no way to test the entire system without building a physical prototype and testing its performance. While this might be an acceptable solution for something small, simple, and cheap to build, it becomes a serious problem for larger projects. The average contributor is most likely a person of modest means, and probably could not afford to build a functioning prototype of something large, like a car, building, satellite, or playground. There should be a tool that empowers the average contributor to the same level that a simple compiler empowers a coder. Such a tool should allow a contributor to build, edit, test, and share large complex projects. That is what I shall attempt to describe.
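
For anyone who hasn't lived in code, here's how small that inclusion step really is, using Python's equivalent of #include (the module name is made up for the example):

```python
# gear_train.py is another contributor's finished module, reused wholesale.
from gear_train import GearTrain   # Python's cousin of C's #include

def build_transmission():
    # Their work is now callable as if it were your own.
    return GearTrain(ratio=4.2)
```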

Initially, I envisioned such a tool as a monolithic piece of software. This one program would handle all functionality, from the design of individual elements all the way to the testing of large, complex projects. I based this initial notion on the analogy of a software development environment, where code writing, project organization, and testing are all part of the same program. Understandably, this system became very hard to describe, as I tried to convey a whole family of functionality while retaining the notion of a singular program. It wasn't until recently that I realized what I was really looking for was a close-knit ecosystem of smaller, function-specific programs. Once broken down into functions, the system is suddenly much easier to conceptualize, and hopefully easier to describe.

Imagine my embarrassment...

Continued in Part 2



Wednesday, August 19, 2009

KISS

Any of you who are frequent readers (which isn't many of you, judging by the almighty Analytics) may have noticed the ever-changing look of the blog. For a while I got kind of fussy over its appearance and bit off more than I could chew. The end result was a fugly template with borked code and a frustrated blogger. So, I've returned to a basic little default template. Is it fancy looking? Not really, no. Does it allow people to actually read the blog without seeing template vomit? Yes!

Simplicity, what a concept.

Let the Music Move You

Music and Storytelling have always been deeply intertwined, so it should come as no surprise that music can be a great asset to both the creative writer and conceptual artist. Here's a couple techniques I've found useful when looking for inspiration.

The Character Playlist:
Whether you are creating a character for film, comics, animation, or just plain prose, understanding the ins and outs of their personality is crucial. Sometimes attempting to craft a personality out of whole cloth can be somewhat intimidating, especially if the character is a complex one with conflicting motives and desires. This is a trick that will help root out those complex feelings and brainstorm potential plot directions. It's actually ridiculously simple: create a musical playlist for the character in question. I know it sounds odd and borderline obsessive, but it works. Now, when I say make a playlist for them, I don't mean music that they would listen to. Rather, select a group of songs that embodies the character's essence. It helps to have an eclectic taste in music, although it is possible for a character to reside completely in a single genre. Additionally, avoid the temptation to make all the music about the same feeling/aspect. The goal is to create a complex, interesting character, which generally means they have more than one aspect or emotion. If you're getting stuck, start thinking about what the character thinks about other characters or events in the story. Think about their subconscious desires: what do they really need? You can go even further and make separate playlists for the beginning and end states of the character. How has the character grown or changed? Remember, the song lyrics don't have to mirror everything about the character; what's important are the feelings a song imparts and the bits of it which stick out as resonating with the character. Pandora can be a great tool for this, because it can give you unexpected suggestions, which may give you a stroke of insight.

The Album Concept:
This is a trick for coming up with a concept based on a whole album. While I use the example of a full world or setting, it could just as easily be used for developing any concept. The idea behind this is that an album is generally a complete set of musical thoughts laid out by the musician. By listening to the album repeatedly and iteratively building your concept from the experience, you can craft a concept that is personal and moving. Find a full album of music that you can tolerate listening to several times. Ideally, it should be in a genre with the general overall mood you wish to convey with the concept. Get yourself set up in a comfy spot with a notepad, and listen to the whole album. In the case of classical music, sometimes it's better to use a complete suite instead of an album; the point is to listen to a full musical thought. As you listen, close your eyes and begin to visualize the world the music evokes for you. When you have an idea, jot it down along with the track title or name. Try to listen through the whole album, writing down notes and refining your vision. Once you're done, take a break. Let the ideas you've thought up sit and marinate in your head. Come back later and sit down again with your notes and a medium of your choice. If you're writing, use another piece of notepaper. Visual concepts should be done on a sketchpad, with a sketching implement of choice. Personally, I find it useful to use a ballpoint pen for rough sketching, because it removes the temptation to erase a "young" idea. Listen to the album again, keeping your original ideas in mind. This time begin to capture specific imagery. Once you're done, take a break again. Repeat this process a few more times until you're sure about your concept. At this point you can continue to develop it without the music. You can always come back and listen to specific songs as a reminder of certain aspects. Great free resources for this are Jamendo and Magnatune. Both allow free listening of full-length albums in a wide variety of genres.

Hopefully you'll find these techniques useful. Just remember, these are tools, not rules. Feel free to explore your own methods of exploration and inspiration. Additionally, if you create anything cool using these techniques and feel like sharing, please do!


Monday, August 17, 2009

(Virtual) World Domination Pointers

Alright, I've got a serious bee in my bonnet, so please excuse the bluntness. I've been sitting on some of these ideas for far too long. These are some ideas for projects or business ventures in/for the Metaverse. The ones that still interest me after this posting might get a more fleshed-out dedicated post. Feel free to do whatever you will with these.

SimpleViewer: A Beginner-Friendly Metaverse Viewer
Please, for the love of the FSM, somebody do this. It boggles my mind that despite the myriad development work being done across the OS viewer scene, all of it has only made viewers more complex and feature-heavy. If we want the Metaverse or SL to really attract more than the hardcores, we can't keep on like this. It's like giving an industrial-grade metal smelter to a kid who just wanted an EZ-Bake oven. Beginning users don't need or want a lot of power or fancy functionality; they just want to be able to log in and interact with the world without wrestling with cluttered menus and counterintuitive interfaces. Please, someone take Hippo and strip it down to its chassis. Then design an easy, intuitive user interface that won't make granny want to put her fist through the computer. The world will thank you.

MetaJobs: A Job Board for Metaverse Workers
Can somebody please tell me why the Metaverse doesn't have a unified job board? No, the employment section of the XStreetSL forums doesn't count. I'm talking about something meant to serve both SL and the Opensims. Content isn't quite portable yet, but talent sure is. Anyone who's searched for a job in SL knows what a wild goose chase that is. Making a one-stop web shop for all things employment would be a big boon for both potential employees AND employers. It would be pretty easy; so easy it could be done with a phpBB install and a domain name. Yes, it could and should be more intricate than that, but that's really all you would need to start. I have a hard time believing there's no one out there who sees the money potential.

Convoy: A Secure Marketplace for Pan-Metaverse Commerce
So I just said that content isn't portable between grids yet. That's not completely true, but for the majority of normal Metaverse users it's the reality. The truth is that the functionality to import/export content exists. What's missing is the system of trust between content makers and grid operators that would allow people to buy things in one grid and move with them to another. This creates a giant mess. So what if a company stepped in with an XStreetSL-style marketplace and a dedication to absolute security? Content creators, trusting the market not to sell to unsafe grids, upload their content to the market and put it on sale. The company accepts applications from grids to gain delivery access. The company verifies they are running content-safe server code and enters into a contract with the grid to deliver goods so long as the server remains kosher. Once verified as safe, customers from that grid can buy from the marketplace and have the purchased content delivered to their account on their grid. The company could make money any number of ways, including delivery charges, listing upgrades, and other services. Just remember, the company's number one service is the trusted bridge it provides for content creators and grid owners.
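
As a back-of-the-napkin sketch of that trusted bridge, here's the core gate in Python; all the names and the verification scheme are invented for illustration:

```python
# Hypothetical sketch of Convoy's trust gate: purchases are only delivered
# to grids whose server build has been verified as content-safe.
APPROVED_BUILDS = {"a1b2c3"}      # hashes of audited, content-safe server builds
verified_grids = set()            # grids currently under contract

def verify_grid(grid: str, server_build_hash: str) -> bool:
    """Admit a grid only if it runs an audited, content-safe server build."""
    if server_build_hash in APPROVED_BUILDS:
        verified_grids.add(grid)
        return True
    return False

def deliver(item_id: str, buyer: str, grid: str) -> dict:
    """Deliver purchased content to the buyer's account on a verified grid."""
    if grid not in verified_grids:
        raise PermissionError(f"{grid} has not been verified as content-safe")
    return {"item": item_id, "deliver_to": f"{buyer}@{grid}"}
```

The business model (delivery charges, listing upgrades, and so on) would all hang off that one gate.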

I'm sure I have other ideas I've spaced on. I'll write about them later.

Destroy Perfectionism

Ok, so what was supposed to be a few days' break turned into something a lot longer. I'm not going to cry to you about how I'm sorry, and how I'll do a better job in the future. If you've read more than this post you'll see that I've gone down that road before. Instead, I'm going to accept the fact that I suck at keeping a regular blogging schedule. What made me stop last time was trying to make myself look like something I'm not yet, and then getting intimidated by the expectations I placed on myself. To Hell with that.

Tuesday, July 14, 2009

SpaceX and the Race to Credibility

Yesterday SpaceX, Elon Musk's private space company, successfully launched a Falcon 1 rocket carrying an observational satellite into orbit for the Malaysian government. While private rocket launches are nothing new, the launch was a critical one for SpaceX. Of the five Falcon 1 rockets that have ever lit the candle, this is only the second to successfully reach orbit. Flight 3 nearly made it, but due to a minor miscalculation the rocket collided with itself mid-separation, causing it to tumble off course. Getting to space is, after all, rocket science.

While getting to orbit in only four launches is impressive for a startup, the rate of failure leading up to that launch had spectators and potential customers nervous. When you are looking at forking over at least seven figures per launch, a 25% success rate isn't exactly the kind of thing which inspires confidence. That's why this latest launch was so important for SpaceX. By successfully launching to orbit with a paying payload, the Falcon 1 now stands with a 40% success rate and a satisfied paying customer on its resume. For a space launch company of any size, this success rate is vital, because it is a direct manifestation of the company's credibility. It is doubly vital for a young company like SpaceX, which has set lofty goals for itself and has attracted high-profile contracts such as servicing the International Space Station.

In order to meet those goals, SpaceX needs to prove that it is capable of consistent reliable launches. SpaceX needs more launches like the one yesterday.



Monday, July 13, 2009

Please excuse the Debris

Apologies for the code vomit above. I'm not sure precisely what I did, but I'm sure it's my fault. On the positive side of things, the blog now has expandable article views!

Saturday, July 11, 2009

Art School on the Cheap: Professional Skills Without the Pricetag

Before I get started on this I want to be very clear: I'm not trying to sell you on any of these products. Rather, I'm recommending them as someone who went to art school and came out finding my education lacking. These were some resources which helped me fill in some of the gaps in my knowledge.

It's no secret that art school is expensive. In fact, I would wager that cost is the number one limiting factor preventing young artists from pursuing their passion after they graduate high school. This is tragic, especially because of all the great resources which are now available for those interested in getting an art education without having to shell out the obscene amounts of dosh previously required. Here I'll try to list some of the ones I've found useful, along with their rough price ranges. It's worth keeping in mind that I'm approaching this from the perspective of someone in the animation and game art industry, so my tips may be skewed.

Cartoon Smart:
CartoonSmart.com is a great resource for those looking to get started in Web animation, as well as a number of other subjects.  The site offers affordable video tutorials on a wide range of topics.  The tutorials are friendly, accessible, and packed with information, making this a resource well worth the money.

Gnomon:
The ten-ton gorilla of the Internet art education world, Gnomon is an education resource for professional artists by professional artists. Originally just a series of professional classes held in Hollywood for visual effects and concept artists, Gnomon has expanded to become a multi-headed hydra offering a wide range of products and classes. The Gnomon Workshops are the bread and butter of their offerings, giving several hours of professional tutorials in the $60-80 range. That might seem steep, but when you consider these are in-depth tutorials from some of the industry leaders, the price is worth it. If that's too rich for your blood, Gnomon also offers Gnomonology, which provides shorter-format tutorial nuggets in the $15 range. For more intensive training at a steeper price tag, Gnomon also offers multi-week online classes to the tune of $1K-$2K at Gnomon Online. While definitely one of the priciest resources, Gnomon is professional-grade training presented in a great à la carte style.

Online Art Communities:
By far one of the cheapest ways to learn is to get in the habit of posting to art communities. Basic accounts are almost always free, and they offer aspiring artists the chance to share their work and get critiques. DeviantArt.com is by far one of the most popular, with a wide-ranging user base across all skill levels, making it an ideal place for beginners. However, for those serious about getting honest, constructive criticism that will help them grow, it's best to look to more professionally oriented communities. ConceptArt.org is one such site, with an active and highly knowledgeable community dedicated to helping each other perfect their skills.

Books:
There are an amazing number of great instructional books that can help teach you various aspects of art.  Despite what you might assume, none of them have ever been written by Chris Hart.  Here is a list of books that I do recommend (I own them) for the learning artist:

Launching the Imagination by Mary Stewart
The Vilppu Drawing Manual by Glenn Vilppu
Composition Photo Workshop by Blue Fier
Vanishing Point by John Cheeseman-Meyer (yes that's really his name)
Anatomy for the Artist by Sarah Simblet
Atlas of Human Anatomy for the Artist by Stephen Peck
Force by Michael D. Mattesi
Bold Visions by Gary Tonge
Digital Character Design and Painting by Don Seegmiller
Creating Characters with Personality by Tom Bancroft
Facial Expressions by Mark Simon
The Dynamic Drawing series by Burne Hogarth
The Virtual Pose series by someone, I don't know who.

ImagineFX Magazine:
I could have included this with the books, but it's a magazine and just too awesome to be lumped in with everything else. ImagineFX is an art magazine printed in the UK, but it is readily available in US bookstores in the computer/art section. It offers great articles on science fiction and fantasy artists, as well as excellent art tutorials and product reviews. Normally I have very little patience for magazines, but this is one of my few exceptions. Each issue comes with a disc packed with freebies and video tutorials, making the magazine well worth its slightly elevated price tag.

Graphics Tablet:
A graphics tablet is an indispensable part of a modern digital artist's arsenal. It allows you to draw straight into your favorite graphics program without the hassle of trying to use a mouse. Even artists more interested in traditional art should at least look into dabbling in digital art, as it allows you to freely experiment and try out ideas without the material investment normally required. Yes, I know it seems like a big commitment, but at today's prices for an entry-level tablet, it should be well within the means of most young artists.

Scanner or Digital Camera:
Like the tablet, a scanner or digital camera can go a long way. Not only does it allow you to save and share your artwork, it allows you to non-destructively experiment with it digitally, and even use it as the basis for amazing pieces of digital art. While a scanner is preferable for capturing a high-quality digital version of your art, it can be a big investment for young artists. A digital camera can be a good substitute, although it's always best to take pictures of your artwork in a well-lit area and at as high a quality as possible. Digital cameras also allow you to capture your own reference images, which is an added bonus.

The Internet:
I'm not going to even try to start listing the thousands of awesome resources available to artists for free today on the Net. There are too many to do justice to, and frankly, half the fun is finding them for yourself. There are endless blogs providing tutorials, stock photography sites offering free reference material, and any number of specialized communities (it's the Internet, after all). So go and explore!

That pretty much wraps it up.  If you have anything to add to this, please feel free to add a comment.


Wednesday, July 8, 2009

Google and the Soft Monopoly

Google is taking over the world, or at least the Internet. We've suspected this for a while, so it really shouldn't come as a shock. Yet unlike previous companies which have tried to do this, such as Google's nemesis Microsoft, we don't seem too concerned about it. Quite the contrary, we tend to welcome Google's juggernaut rampage to conquer every segment of functionality on the Net.

Take Google's announcement today that it will be releasing a free operating system based on its Chrome Internet browser some time in 2010. This is from a company which has already established itself as the dominant player in search, email, maps, video, and advertising, and is considered a strong contender in fields such as group collaboration, phone services, and even mobile computing. This is the same Google which is sponsoring a race to the Moon and regularly invests millions of dollars a year in a broad spectrum of startups. The same Google which recently introduced the concept of Google Wave, which may have the ability to revolutionize communications in a way Twitter can only dream about. Google isn't just a 500-pound gorilla on the Internet, it is THE 500-pound gorilla. Now, had any other company with this kind of omnipresence announced its intention to expand into yet another market, there would be panic and a fleeing of the populace to the hills. Instead, the overwhelming consensus seems to be one of eager anticipation, bordering on celebration. So why does Google get treated so differently?

My guess, for what it's worth, is that Google has mastered the art of the Soft Monopoly. A Soft Monopoly is everywhere because people genuinely want it everywhere. It's helpful, useful, and generous, and doesn't go throwing its weight around to get what it wants. Instead, it convinces everyone else that they want what it wants, because what Google wants makes things better for everyone. With only a few exceptions, everything Google has tackled has made the Internet a better place, which wins you a lot of friends. With such populist support, how could a Soft Monopoly do anything but expand? Whereas Microsoft and Apple have consistently tried to dictate what their customers could and could not do, Google has taken the opposite path. Their logic, I suspect, is that the more seamlessly and effortlessly Google can fit itself into the experience its customers dictate, the more positively Google will be viewed. For Google this means offering a lot of high-quality services for free. In doing so they become a ubiquitous part of the fabric of the Net, so when you need a professional solution, the answer comes naturally.

So should we be surprised Google is creating a free open source operating system? No, it's natural behavior for a Soft Monopoly like Google. By leaping into another market with a free disruptive product, Google expands its potential to be helpful in all sorts of new ways. For a Soft Monopoly, helpfulness is directly related to future profit, so Google is right on its game plan. Right now Google seems content to constrain its monopoly to the realm of bits and bytes. So until your Google Groceries get delivered to your Google Apartment door by a Google Grocerybot, I think the world is safe for the time being.


Thursday, July 2, 2009

Upcoming Posts

I've come to two realizations over the past week. One: I love writing articles. Two: I have more things to write about than time currently allows. So I'm going to jot down a bunch of the topics for posts I plan to write and release soon.
  • Tools for open source invention of real world things
  • Community vs. Content on the Hypergrid
  • How to capitalize on the lack of content in the open Metaverse
  • Modular systems for third world empowerment and infrastructure
  • How computer vision will revolutionize machinima
  • Can radio be crowd sourced?
  • Funding the Metaverse explosion
  • Bringing the talent from SL to the open Metaverse
  • Rapid game design prototyping system
  • The Encyclopedia of Stuff
  • Mapping material flows and artificial metabolisms
  • Citizen journalism through social multimedia mash up tools
  • Planning infrastructure through modular evolution
  • Hybrids in the skies
  • Air deployable emergency aid systems
  • Meshing social media and consulting
  • Can flash mob tactics be applied to the freelance job market?
For now, that's all of the ones I can keep track of that I need to write about. Looks like I need to get cracking!


Tuesday, June 30, 2009

A Virtual World like Polka-dot Flypaper

Recently, the popular Second Life blogger Hamlet Au made a post raising a point of contention with Chris Abraham's remarks unfavorably highlighting the differences between SL and Twitter. The specific point Hamlet took issue with went roughly as follows: Twitter is not like Second Life in that it is "cheap, light, and open." In essence, Chris is pointing out several truths regarding the major differences between Twitter and Second Life. It is true that Twitter is cheap; it's free. It's also true that Twitter is light on system resources in comparison to SL; one is a text-based app, the other is a dynamic real-time 3D environment. Additionally, it is true that in relative terms Twitter is far more open than SL. The potency of Twitter's API has been on prominent display throughout the #Iranelection crisis. SL has a long way to go before it can boast such a promiscuous architecture.

In response, Hamlet listed off a series of advantages SL has over Twitter. To his mind SL has the advantages of being "unique, sticky, and profitable." Unique in the fact that there are no real direct competitors to SL, aside from the Opensim grids. Sticky in how much time users spend using the different services; for this Hamlet points to statistics showing the huge lead in minutes per month spent on-platform SL enjoys over Twitter. The last point, profitability, is almost a gimme, given that Twitter is completely funded by venture capital whereas Linden Lab makes a tidy profit off leasing land to residents.

For the full post, you can read it here. Honestly, I'm not going to take sides on this one, because while I view the argument as rather immature and the comparisons a bit sketchy, the truth is that both of them are right on all points. What both of them are doing amounts to two blind men feeling up different sides of an elephant. In this case, let's call that elephant The Killer Social Platform. In order to be effective and pervasive, this platform would have to be open, light, cheap, and unique. In order to be sustainable, it would have to be sticky and profitable. In more whimsical terms, the Killer Social Platform has got to be like polka-dot flypaper. It has to be cheap and pervasive enough that it's everywhere you want it to be, and compelling and unique enough to keep users hooked once they've tried it.

Maybe the solution isn't text-based like Twitter, or 3D like SL. Maybe it resides in the middle. Perhaps the solution is an open, hostable 2D virtual world that utilizes ubiquitous lightweight technology like Flash. It might be isometric like Metaplace, but my hunch is that a side-scrolling style world would spread far faster, as it's far easier to create content from a side view. To be fair to Metaplace, they do an excellent job simplifying the task of bringing content reliably into an isometric world, something I applaud. Flash is lightweight enough and ubiquitous enough that it could be embedded practically anywhere, and website embedding could allow non-users a window into the world prior to actually signing up to participate. This is one of Twitter's strong points, as anyone can follow a conversation without signing up, effectively selling the receptive on the platform's utility. With persistent hosted environments comes the opportunity for profit, whether through hosting fees as SL does or through advertising.

Additional profit can come from the sale of additional functionality, or the capacity to handle that functionality. Ownable persistent virtual property has a definite inherent stickiness to it, especially when coupled with strong traditional social networking services such as groups, events, and content sharing. The visual, location-based aspect of SL, coupled with its ability to let the user participate simultaneously in larger discussions, is a large contributing factor to its stickiness. In regards to openness, that is genuinely a structural business decision that must be made early in a product's development.

The degree of that openness is debatable, as some will lean towards a completely open virtual framework for anyone to host, while others will inevitably prefer the hosted API route of Twitter. Either way, the current trends point toward openness as a key element of widespread adoption. Now we're left with uniqueness. The fact of the matter is that a system that can pull all of the other elements off, and do it well, will be unique. Sure, there may be competition; everyone believes they can invent a better wheel. Competition need not be viewed as a bad thing, though. Historically, competition has always been an excellent motivator for innovation.

I'll give the topic a rest for now, mostly due to the fact I am beginning to lose my train of thought in a haze of sleep deprivation.  I'll probably come back and describe this platform some more in the future, gotta tease this string out to see what unravels.


Monday, June 29, 2009

Sketchbook Dump Highlight Post




If you follow me on DeviantArt, you'll know that I recently did a two-part marathon sketchbook dump that ended with nearly twenty new pictures being uploaded. From this I learned two things. One, I need a dedicated flatbed scanner, because taking snapshots of each sketch just doesn't cut it when it comes to quality. Two, I have a weird obsession with drawing heads. Maybe this has to do with the fact that I'm insecure about my ability to effectively draw faces, so in response I just draw a lot of them. Instead of spamming you with all the pictures, I'll give you a sampling of my personal favorites from the batch. To see all of them, feel free to check out my deviant page.


[Image] A fun little cartoon beastie I did while experimenting with combinations of art styles. This was an attempt to merge art nouveau and hispanic chibi (if that's even a style).


[Image] Not everything was cute, however. These are a couple of pen sketch concepts of some generic cyberpunk/horror characters.



[Image] One of my absolute favorites from the set was also the last I uploaded. I think I have a thing for biomech vehicles. Whether I'm any good at drawing them is a completely different matter.

Hope you take a look at the full dump, and please feel free to let me know what you think!  This artist loves feedback.

Joomla for the Metaverse

I recently posted this question on Twitter and Plurk: if Opensim is the Apache of the Metaverse, then what will be the Joomla of the Metaverse? Clearly I need to work on my grammar when tweeting/plurking, but I was being serious. This isn't an idle or rhetorical question; it's something which has to be answered if Metaverse solutions like Opensim are ever going to truly become the next incarnation of the Web.

While it is true that the Web grew through its infancy without much in the way of content management software, such software has become an essential ingredient to its continued growth. Even in its early years, the growth of the web was spurred by programs and services like Geocities and Dreamweaver. These served to sufficiently lower the bar of entry into the world of web design to allow those with little or no technical knowledge to quickly create a web presence. This created a primordial sea of amateur designers and content from which the committed and skilled graduated to create bigger and better things.

Right now the Metaverse scene is in the AOL stage, with a few centralized services offering walled-garden experiences wherein they hold the monopoly on content and users. A few pioneers (OSGrid, ReactionGrid, Openlife) cling to the fringes, boldly attempting to create more open alternatives. However, on their own they cannot compete on the scales of content and user population that the walled gardens offer. This is why the developments in cross-grid transportation in Opensim are so widely hailed: in many ways they are the equivalent of the most powerful element of the Web, the hyperlink. Once a critical mass of grids become connected and standards arise for commerce and content sharing, this new web of grids will begin to eclipse the walled-garden providers. But this critical mass must exist.

This is where the question of content management becomes so crucial.  Without tools and services to lower the barrier of entry, the growth of the open Metaverse will be stunted.    The open Metaverse will need its Geocities, its Dreamweaver, and its Joomla.  The barrier of entry must be lowered, which means making the deployment and management of entire grids simple, pain-free, and dirt cheap.  When the barrier of entry is lowered, it will open the gates to a stampede of amateur Metaverse creators seeking their slice of the pie.  Now to be clear, most of these amateurs will not create great virtual environments.  Just like the web now, most of the content will be mediocre. 

A Metaverse Geocities will probably offer free hosting for simple one-sim grids, with simple template-based creation tools. Revenue will most likely be provided in much the same way Geocities provided it: embedded ads for free accounts, or a monthly cost for those who choose to upgrade. An astute observer will note some similarities between this and grid services currently provided, sans the advertising. One of the primary differences will again be in the linking, as users of such a service will occupy a grid of their own, populated only with their own sims and any hypergrid sims that they should choose to link to.

A Metaverse Dreamweaver will most likely provide a world creation tool set based heavily on the tool sets currently used by MMOG developers to lay out large environments. When augmented with a marketplace function (much like a pan-grid XStreetSL, perhaps), it will facilitate a drag-and-drop style of world creation. Much in the same way Dreamweaver simplified complex web functionality into a WYSIWYG interface, its Metaverse cousin must do the same. Commonly occurring complex scripted objects would be abstracted into click-and-place entities. Any fine-tuning of these worlds would most likely be performed in-world, the Metaverse equivalent of hand-rolling HTML and Javascript.

Finally, a Metaverse Joomla would abstract and simplify the maintenance of large, complex grids in much the same way Joomla simplifies and abstracts the maintenance of large, complex websites: by providing high-level editorial, publishing, and management tools. In the case of the Metaverse variant, this would entail on-the-fly creation of template-based sims, the ability to monitor real-time activity, permissions-based administrative controls, and the ability to extend the grid to include third-party functionality and themes.

I cannot begin to stress how important these services will be to the growth of an open Metaverse. I've listed these three comparisons in this order for a reason: it is most likely the order in which they must occur. Right now the major battles of cross-grid content transfer loom just over the horizon, as the first transfer mechanics are just now being created. While that battle rages, a Metaverse Geocities can be introducing the world at large to the idea of being a virtual world maker. Inevitably some compromise will be reached, and a Metaverse Dreamweaver would capitalize on whatever content marketplace system emerges. As these worlds grow and mature, and the need for simple dynamic management of large-scale grids emerges, a Metaverse Joomla can fill the need.

So the question remains: if Opensim is the Apache of the Metaverse, what will be its Joomla?

Quick thoughts on motivation in MMO design

I wanted to take a quick break from my talks on technology and more serious stuff to share with you a couple thoughts I've just had regarding social dynamics in MMOs, and online games in general. I hit upon them reading this article from WebWorkerDaily, which mulls the sociological and psychological consequences of looking for a job in a crowded job market. The basic thrust of the article is that as the number of potential candidates for a freelance position increases, an individual's confidence in getting that position drops dramatically. This immediately reminded me of the psychological effect known as the "Diffusion of Responsibility". This effect is tragically exemplified by the murder of Kitty Genovese, a young woman who was stabbed to death on a New York street.

What makes her case noteworthy and tragic is the fact that there were dozens of people who either witnessed the murder or heard it occurring, and yet nobody took the initiative to save her or even call the police. In both cases we see a parallel behavioral trait emerging: in situations where there is a clear goal which requires an active commitment, a person's willingness to engage is negatively impacted by the size of the crowd present. This is an important effect that designers of online games must take into consideration when crafting both collaborative and competitive scenarios. This may be one of the reasons why random matchmaking is so effective at spurring competition. By limiting the scope of competitors any one player will encounter in a play session, the concept of winning becomes psychologically acceptable.

Collaboration is a much trickier system to balance, but the basic principle still holds true. Smaller teams tend to be far better at organizing to the task at hand than larger ones. This would suggest that systems which allow for high levels of stratification in leadership would be the most successful at promoting collaborative play. The trick comes in the assignment and enforcement of tasks, something which often is a stumbling block for many systems. For instance, Planetside utilizes a highly stratified leadership structure, with player groups consisting of squads, platoons, outfits, and armies, in order of ascending permanence and scope. Often, however, there is a serious lack of organization and discipline within these rather stratified systems, because there is very little by way of carrots and sticks to encourage unit cohesion. A top-level "general" can demand focus be given to a certain region until they're blue in the face, but without any adequate method of incentivizing that focus, their orders come across far more as a suggestion than an imperative.

Another factor worth considering is the perceived cost of acting. In most online games this cost is time. If a player perceives one activity with high desirability but a large time commitment, and another activity with moderate or low desirability but little or no time commitment, a large portion will inevitably go with the latter option. On the competitive side, this is one of the big drawbacks to random matchmaking, as it requires all the participants to wait long enough for a match to fill enough slots to be worthwhile. On the collaborative end, it means that when given a choice among several available collaborative tasks that will help the team, the trend will be for players to choose the action with the least time cost, even if it is detrimental to the team's overall chances of success. In the case of Planetside, there very well may be incentives in place, but if they are not immediately and intuitively accessible to the commanding players in a way that encourages their use, then inevitably those players will take the route which requires less time and simply rage on the chat channel.

So what are the lessons here?
  1. Keep group sizes small to keep competition and collaboration fresh and personal
  2. Create stratified levels of leadership to prevent task overload for any one player or role
  3. Be mindful of the impact of large groups and how they can affect competition and collaboration
  4. Ensure that the cost of actions, be it real or perceived, is balanced in a way that promotes the kind of behavior that is desired
  5. Create leadership functions which allow for painless task assignment and role enforcement
Ok, enough with my ramblings. Back to building characters in Flash!


Saturday, June 27, 2009

Sourcery: Transparency for Stuff

Today I want to talk to you about stuff. Real stuff, products you buy and use every day in life. Specifically, I want to talk about where it comes from and how it's made. Let's start this off with a little exercise.

Look around you and identify/pick up something in your proximity. In my case it's a wireless USB mouse. If it's name-brand, then you should be able to identify the company that made it with relative ease. My mouse is made by Logitech, a company whose products I've traditionally trusted but recently have become frustrated with. Now, for some this name-brand identification is enough, but let's dig deeper. If your object contains more than one piece, it's fair to say that someone made those pieces. It's also fair to assume that in a complex object like a wireless mouse, the pieces were made by different companies, using different materials and methods. In turn, the materials used to make those pieces had to come from somewhere as well. While we can know this truth in the abstract, for most of us this is where our knowledge ends. For instance, I don't know who made the plastic casing on my mouse, or even what it's made out of. Similarly, I don't know anything about the methods used to make these pieces, what company made them, and what their practices as a company might be. For all I know, I'm using a mouse that is powered by the soul of a baby seal clubbed over the head with a rod of plutonium. While I certainly hope this is hyperbole, it illustrates my point.

Shouldn't we, in a world of connected and relational data, be able to easily discover what is in our products so that we can buy intelligently? As an informed and concerned consumer, I would love to be able to know with relative confidence where everything in my mouse has come from, whether it contains any materials I would rather avoid, and that it is indeed radioactive-baby-seal free. For granola-munching progressives like myself there are obvious advantages to this system, as it allows people to truly see what goes into the making of the things they buy, should they choose to do so. However, this kind of system would be of utility to anyone concerned about the how, what, and where. For example, informed parents would be far less likely to buy a product for their child if they knew that one of the manufacturers of one of the components used a chemical in the process that has been clinically proven to negatively impact children. Someone with hometown pride may favor one product over another if they knew that it contained a component made in their home town. This concept may seem like a bit of a stretch, but it's worth noting that there are several forward-thinking companies which have already begun self-reporting on their sourcing. Their logic is that by being transparent to their customers, they not only hold themselves accountable for what goes into their products, but also establish a trusted relationship with customers, who will in turn reward them with brand loyalty.

So what would this system (let's call it Sourcery, because it makes me feel clever) look like? Clearly this is too much data for one company to maintain with any semblance of agility or accuracy. A single edition of such a report would take years to compile and would be woefully out of date from its release date onward. So instead I look towards the Wikipedia model. Like Wikipedia, Sourcery has to be agile and decentralized, self-editing, and contain a tolerable level of credibility. Entries in Sourcery would be divided into three categories: products, companies, and materials. Product entries would list materials used in their manufacture, other sub-component products, and the company of manufacture. Company entries would contain a list of products produced by the company. Material entries would contain basic information on the properties of the material, and a description of how the material is created. Sourcery should be open enough that anyone willing to contribute knowledge can, but also allow companies the ability to officially confirm and validate the information contained in entries pertaining to their products. Herein lies the largest challenge of the entire system. Individuals should have the power to openly participate, as with Wikipedia, yet in order to be fair the system must allow companies to differentiate between factual information and baseless claims. At the same time, companies must be limited in their power to censor information, otherwise the system completely loses its utility. Perhaps the method lies in differentiating the powers of users and company representatives. Anyone can sign up to edit the articles, thus allowing the company the same level of access as everyone else. Companies can sign up for an official representative account, which adds the power to tag various information as officially confirmed or contested by the company. The official account would not allow for the deletion or wholesale censoring of information, but would instead simply allow the company to add its position to any claims made regarding its products. Occasionally a company will want to protect its intellectual property, especially when it comes to proprietary materials. They should be allowed to do this, but with the understanding that the more which is withheld, the less inherent trust will be placed in the product.
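
To make the three entry categories concrete, here's a toy sketch of the data model in Python; the entries and field names are invented for illustration, not real data:

```python
# Hypothetical Sourcery entries: products, companies, and materials,
# cross-linked by name. All data below is invented for the example.
products = {
    "wireless_mouse": {
        "company": "ExampleCorp",
        "materials": ["ABS plastic"],
        "components": ["mouse_shell", "sensor_board"],   # sub-products
        "claims": [
            {"text": "casing is ABS plastic",
             "status": "officially confirmed"},          # company rep's tag
        ],
    },
    "mouse_shell": {
        "company": "ShellCo",
        "materials": ["ABS plastic"],
        "components": [],
        "claims": [],
    },
}
companies = {"ExampleCorp": {"products": ["wireless_mouse"]}}
materials = {"ABS plastic": {"properties": "rigid thermoplastic",
                             "made_by": "polymerization"}}
```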

Another critical component of Sourcery should be the linking of materials and companies to related data. For example, I should be able to see clinical studies related to a material, or a report on human rights conditions at a company. This is where the real usefulness of Sourcery would reside. Without it, I may know a product contains a certain material, but lacking the context that it is something I might want to avoid, the system loses its meaning. With it, disclosure creates a climate in which companies can be petitioned to make their products safer, better, and more sustainable. Companies that fight against such change would be at an inherent disadvantage when competing against companies with fewer qualms about being honest with their customers. One feature that would certainly be a technical challenge to build, but a boon to utility, would be the ability to extract high-level summaries. It would take a request for a summary on a product, dig through the web of sourcing data, and return anything matching the given search parameters. For example, let's say I want to know if anything in my mouse was made by a certain company, or made in a certain country. Doing this search by hand would take some time, but the feature could discover and return the answer quickly. This functionality could lead to a whole new dimension of comparison shopping, allowing consumers to quickly get summaries on several similar products and make a decision accordingly.
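As a rough illustration of how that summary extraction might work, here's a sketch of a recursive walk through the web of sub-components, building on the hypothetical Product class from the previous sketch:

# Walk a product's sourcing tree and report everything matching a query.
# Builds on the hypothetical Product class sketched above.

def find_matches(product, predicate, trail=None, results=None):
    """Return (path, product) pairs for every entry satisfying predicate."""
    trail = (trail or []) + [product.name]
    if results is None:
        results = []
    if predicate(product):
        results.append((" > ".join(trail), product))
    for component in product.components:
        find_matches(component, predicate, trail, results)
    return results

# Example: was anything in my mouse made by a particular (made-up) company?
# matches = find_matches(mouse, lambda p: p.manufacturer.name == "Acme Plastics")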

Undoubtedly, if Sourcery were ever created, there would be those who would decry it as unfair, but to my mind it is the essence of fairness. When we buy a product, we are in essence voting for a supply chain and a method of production which we may or may not actually desire or approve of. By being informed, we can make better choices, whatever those choices may be.

Continue Reading

Friday, June 19, 2009

0
Clearwire, Backlog, and other Mumblings

I will honestly admit to a case of literary blue-balls from the past two weeks. I wanted dearly to finish the second part of my rant on MMO structure, and perhaps there was a bit of stage fright that came along with it. Meanwhile, ideas and thoughts kept piling up in notepads and in the back of my mind, which only added to the frustration over the incomplete status of the post. Now that it's out, I can finally update briefly on several fronts.

First, I am most excited to report that I have freed myself from the shackles of AT&T's DSL morass and have happily switched to Clearwire, a budding WiMax broadband Internet service. Every negative assumption I had prior to taking the plunge (due mostly to bad experiences with DSL and cable) happily turned out to be wrong. It was affordable, offering the same bandwidth for half the price, easy to order, dead easy to set up (quite literally plug and surf), and so far rock solid in terms of stability. As someone who is neither a landline phone user nor a TV watcher, the ability to have simple, effective, reliable Internet access without the need for other services is an absolute boon. But enough gushing; undoubtedly I will eventually find something to grouse about.

I've finally taken the plunge and gotten myself a YouTube account (gasp!), which I've noticed gives me a subtle motivation to produce new work now that I have a venue in which to display it. For your viewing pleasure, here is my demo reel, updated slightly to include some additions which should have been included long ago.


Rest assured you'll be seeing a lot more articles soon; I have a couple of big ideas that naturally clump together, which I will hopefully begin on shortly. These won't be art related, as they have more to do with some of my thoughts regarding technology and innovation. But those posts are for another night, and it's nearly 1AM. Goodnight folks!

Continue Reading

Friday, June 5, 2009

0
The Currencies of the Realm, Part 2

Ok, so it took a little prodding, two weeks of procrastination, and a weekend obsessing over the nightmare in Iran, but here we go with part two.

In part one I made much mockery of what could be called the standard MMO formula, and especially at issue was the ever-present mechanic of grind, which exists for the dual purposes of leveling the player's character and providing a method of generating income. More often than not, this income is used to purchase gear to grind more efficiently. At the center of all of this is the imperative to gain experience in order for the player to be admitted into the next major segment of the game's story. Yet at the end of the day, experience seems to be a somewhat awkward way to measure someone's immersion and affluence in an MMO. It's used as a one-size-fits-all metric for battle prowess, story progression, social sway, and worldliness. All it's really measuring, however, is how many hoops you've jumped through thus far. But enough ranting, you get my drift.

Experience points are really a holdover from tabletop role-playing, a system where the player count is inherently limited and you are constantly under the vigilant guidance (be that helpful or not) of the GM. The job of a good GM is to provide a custom-tailored role-play experience that fits the current skill level of their players. In an MMO, the presence of literally thousands of players makes such personalized storytelling completely impractical at this point. So in order to compensate, the process is abstracted into a framework of preset quests and an environment full of generic beasts and enemies which act as a proxy for the personalized encounters a GM would provide. The core difference, of course, is that the GM provides you with encounters as a method of furthering the player narrative, whereas in an MMO they are really just used as placeholders. An excellent example of the kind of bland gameplay experience this can breed can be found here, in a post by Eric Heimburg. Because of this, something has to rise in importance to fill the holes in the narrative. It needs to be something that you are striving toward, something that comes with a promise of more. Therefore, experience and level are elevated from their simple clerical function in tabletop systems, where they are used primarily by the GM as a method of scaling the difficulty of encounters. Instead, they become the driving force behind player action, and in doing so lose what was fun about the system to start with. In some cases this has been taken to ludicrous extremes, such as Jade Dynasty (snarkily called AFK Dynasty), in which you can literally pay real money to have the computer grind for you. Colin Brennan of Massively.com has an amusing article on this, one which begs the question: what is the point of having an experience system if you give players every means possible to avoid confronting it?

So how do we create a system that meets the needs of an MMO? What would a system look like that promoted player interaction, accommodated the shifting needs of players, and accurately reflected all aspects of a character's influence? Ironically, the answer is to utilize some good old arbitrary metrics, but to do it more intelligently. Experience points aren't a problem because they are arbitrary; they are a problem because they are a metric ill-suited to the task at hand. So what do we want to measure? To my mind there are three metrics which really encapsulate an MMO experience: Economic Capital, Social Affluence, and Battle/Competition Effectiveness.

Economic Capital is an obvious and already implemented feature in the vast majority of MMOs, be it coins, credits, very small rocks, etc. Aside from experience points, economic capital is probably the biggest motivator driving grind. However, it's an essential part of any system where a capitalist framework exists. It would certainly be interesting to see an MMO or virtual world attempt to build a system without economic coinage.

Social Affluence is a feature which has at times been used by MMOs, but generally in a secondary role, most commonly as a penalty system against griefers and trolls. However, such a metric could be used as a reward system for participation in social events, pro-social behavior, and generally any activity deemed to enrich the player community. In many ways this metric seeks to quantify a player's contribution and inherent value to the continuing vitality of a community. A few posts ago I mentioned Cory Doctorow's Down and Out in the Magic Kingdom, a tale which featured an economy based completely on a social-capital currency called "Whuffie." Whether an economy based exclusively on such a system is feasible or even desirable is a point of debate, and certainly one Doctorow explores in the book. But there is a definite utility in such a metric.

Battle or Competition Effectiveness is the attribute most commonly associated with our old friend experience points. In simplest terms, this is a metric for determining, well, your effectiveness in a competitive or combat-related scenario. Often it is broken down into sub-categories of health, energy, stamina, armor, etc. There's nothing inherently wrong with this system either, but its association with experience often imparts a sense of inevitability and linearity.

So up to this point, what's actually new? Not much, as all three metrics have seen some use in games. So why bother spelling it all out? Because by abandoning the notion of experience points we can get to a core truism of the MMO: all three are really currencies. We don't generally think about them this way, but in fact that's what they are. The reason we don't is that, with the exception of coinage, developers and designers tend to treat them in a very non-currency fashion. We can think of the game as a central bank with the power to print its own currency. This currency is handed out arbitrarily and incrementally to players for achieving certain goals, a meritocracy-based pay structure. It's at this point, however, once the player has been given possession of their currency, that developers cease to treat it like currency. Players can seldom redeem their experience for anything aside from the promise of increased access.

So what if we did treat each as a full-fledged currency from start to finish? What if a player could one day decide to cash in their battle effectiveness and purchase social affluence instead, allowing them to move from a play style centered on battle to one centered on socializing? As long as the developer finds clever and compelling services and products for each currency to be spent on, the market will for the most part balance itself. For a developer concerned about exploitation of the system, all they have to remember is that they control the spigots that allow each currency to flow into the system, and they control the cost of the goods and services. They can even control the exchange rates, should they choose to do so.
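To illustrate the idea, here's a toy sketch of such a wallet, with developer-set exchange rates. The currency names and rates are arbitrary stand-ins, not a proposal for actual numbers:

# A toy sketch of the three-currency wallet with developer-controlled
# exchange rates. All names and rates are illustrative assumptions.

EXCHANGE_RATES = {
    # Developer-posted: units of the target currency per unit spent.
    ("battle", "social"): 0.8,
    ("social", "battle"): 0.5,
    ("battle", "economic"): 2.0,
}

class Wallet:
    def __init__(self):
        self.balances = {"economic": 0, "social": 0, "battle": 0}

    def earn(self, currency, amount):
        # The game acts as the central bank, "printing" currency when
        # a player completes a qualifying activity.
        self.balances[currency] += amount

    def exchange(self, source, target, amount):
        rate = EXCHANGE_RATES.get((source, target))
        if rate is None or self.balances[source] < amount:
            return False  # no posted rate, or insufficient funds
        self.balances[source] -= amount
        self.balances[target] += amount * rate
        return True

# A battle-focused player deciding to cash in for social clout:
# wallet.exchange("battle", "social", 500)

Because the developer sets both the posted rates and the prices of the goods each currency buys, they keep a grip on the macro-economy even while players trade freely.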

By making all of these transactional, developers can create activities which give rewards accurate and pertinent to the actual activity, and can create self-sustaining player dynamics by balancing the supply, demand, input, and output of these currencies through the system. Admittedly, it seems counterintuitive to be able to trade one's physical attributes for money or social capital, but in many ways it fits better than the experience points model. For example, I could go to a blood bank tomorrow and, depending on the blood bank, either donate my blood, thus gaining social capital, or sell it. Compare that to the notion that by squishing 30,000 rats a player is somehow qualified to slay demons. The only real-world analogy which works there is that of muscle-building (which might as well be a form of real-world grinding).

This system need not spell doom for storytelling, which is still a vital part of any form of entertainment. Quite the contrary: it gives developers greater power to influence player action and participation in the storyline. A story could very well roam from the battlefield to situations where social clout is necessary, to sections in which monetary wealth is of great importance. By granting players the ability to trade between these three currencies, we give their characters the flexibility to evolve and grow over time. But as always, with great power comes great responsibility, and developers and storytellers will be hard pressed to create richer and more compelling worlds and storylines should they choose to use such a model. Perhaps for some developers the answer would lie in smaller linear narratives, much in the same way World of Warcraft adds new dungeons with patches. Others may be more ambitious and choose a branching, open-ended universe model, in which the narrative is defined by a constant set of conflicting choices presented by compelling NPCs and informed by a rich, explorable world (Deus Ex was an early masterpiece in this regard). Still others may abandon developer-dictated storytelling altogether in favor of gameplay mechanics which cause player-driven interactions to create stories of their own. The possibilities are nearly endless. It really boils down to the needs and desires of the developer. Some storytelling methods are inherently more developer-intensive, and each carries a set of strengths and weaknesses which should be carefully considered.

After I wrote the first part of this article, a coworker who follows me online commented that I was long-winded. In retrospect I would have to agree, and hopefully this has been at least a tolerable read. I don't pretend that this system is without its problems, nor do I view it as a panacea to cure all the ills of MMOGs. I may be completely off base, but my hunch is that the MMO scene is looking for something fundamentally new, and this might be a good place to start.

Continue Reading

Wednesday, June 3, 2009

0
Random Thought of the Night

I promise to continue my crusade against vanilla-flavored MMOGs shortly, but before I crawl off to bed I thought I'd share some musings about everyone's favorite animation platform that no one knows about: Blender.

So I've decided to give learning Blender another go. It's become powerful enough, and the industry is finally starting to catch wind, so it's a good time to get reacquainted. I have to admit that while I'm finding some parts of the interface natural and intuitive, other things are just leaving me scratching my head. Why, for instance, is it that when I'm deleting an obviously redundant edge, I cannot simply remove the edge itself without completely obliterating the face upon which it rests? The other big thing that bugs me is the camera. I want to orbit around the object of interest without the camera randomly beginning to keel over like a drunk penguin. It's not exactly conducive to a rapid production pipeline. Am I just being thick here? Do I simply not possess the knowledge of some painfully obvious preferences which would make my life easier? Probably. I hear the next release is slated to have all manner of UI improvements, which I welcome wholeheartedly. Now, all of this might simply be the grousings of a modeler who has invested a good chunk of brainmeat in conforming to Maya's similarly eccentric workflow. But it sure as hell feels easier to do simple things like rerouting an edge loop in Maya, while in Blender I feel like I'm asking it to fly to the Moon using only a piece of tinfoil and a Q-tip.

Meh. Maybe I'll actually look at a tutorial for once tomorrow.

Continue Reading

Sunday, May 31, 2009

0
The Currencies of the Realm Part 1

This was going to be a single post, but as I typed it, I realized that no one in their right mind would read all of this at once. Therefore, for your sake, here is part one of my rant on MMOGs.

Today I want to share with you something I came to realize while killing time at a Starbucks in the middle of rural Georgia (yes, I appreciate the irony too). I've been giving MMOG design a lot of thought in recent weeks. This may be due to the fact that I've got a pet project in the wings, which I'll share with you when the time is right, but for now allow me to continue. I'm fairly certain I am not the only person in the world currently frustrated by the homogeneous nature of the MMOG scene as it stands today. With only slight variation and rare exceptions, every MMOG released to date seems to simply be a repackaging of the same tried-and-true swords-and-sorcery formula, which has changed little since our parents played it and it was called Dungeons and Dragons. This even goes for games which aren't high fantasy but are instead repackaged in a science fiction or "modern day" setting. That is merely different window dressing on what is essentially the same freaking window. So what kind of a formula are we talking about here? Most of us could probably rattle it off in our sleep, but for redundancy's sake let's spell it out:

You are (insert character name here), a young and promising (insert gender, race, and profession here). You have arrived in (world name here) to find it a place full of danger, strip malls selling armor and weapons, and a seemingly endless supply of weak creatures to pillage and plunder. Early on you are informed of the grave impending danger of (antagonistic power here) and are inevitably given an unavoidable quest, which will surely lead to more unavoidable quests, which will unquestionably set you on a collision course to go head to head against the dreaded (antagonistic power here). But before you can Save the World (just like the guy who signed up right after you) you must prove your unswaying dedication to your goal and complete submission to the Powers That Be (the game devs) by grinding. What is grinding, you ask? Grinding is what game developers stick between Meaningful Content when they want to slow their players down. It's what makes MMOs financially feasible, and it's what keeps the story team from absolute nervous breakdowns.

But as a lowly player you're not supposed to remember any of that; you're supposed to focus on the task at hand. Specifically, you must plow your way through the faceless and oft nameless hordes of Slightly Weaker Beings in pursuit of the coveted Experience Points. Nowhere is it explained why on earth experience points (and their much more easily recited cousin, levels) are so damn important to everyone in (world name here), or who in the world is even keeping track of such an arbitrary statistic, but it's commonly accepted that they are Very Important indeed. Provided you are a persistent and easily baited player, you will inevitably accumulate not only the sacred XP, but also a small landfill's worth of Marginally Valuable Stuff which, for some unexplained reason, was being toted around at random by members of the faceless hordes you slaughter.

Through your travels you learn that some of these items can be combined through some arbitrary process to create Expendable Items of Average Utility. They're always expendable, because if they weren't you might start to realize what an absolute drag grinding is. Every blue moon or so, one of the faceless horde will drop the highly prized Rare and Obscure Item of Great Value, which of course can be used either to further your genocidal spree or to create a Very Expendable Item of High Utility. So you play on, as the quests begin to blur into each other, the storyline lurching along in syrupy hiccups between hours of grind. Then, just when you've fallen into the groove, the plot reaches its inevitable climax and you, the (gender, race, profession here), must summon your greatest effort and several friends with questionable attention spans and bad internet connections to defeat the One True Evil.

Then it's over. You've reached the top, and suddenly the grind has lost its meaning because your experience points and your level now seem so arbitrary. After all, what does the person who has defeated the ultimate evil do next? You hope that the developers will hurry up with that promised expansion, the one which will unlock a whole new world of faceless creatures and an extension to the plot, and along with it a reason to start grinding again. But until then you are a lost soul, wandering about the world fighting the few surviving Very Difficult Creatures in the hope they will provide a challenge and more rare items. But there are never enough of them, and even when you do defeat them, you've only joined the swelling ranks of the Players Who Have Done Everything, and you realize you look an awful lot like all the others. Depression sets in, and you stop playing, vowing never to play an MMO again. Yet before long you find yourself sitting at your computer excitedly with the brand new and highly marketed (game name here), hoping that this one is somehow different from the last.

I nearly fell asleep typing that.

Now, as I said, there are exceptions, as there are to any sweeping generalization. Games like Planetside and, to an extent, EVE Online try to mix things up by making it about PvP territory control. Some get more serious with their content creation tools; others mix it up by encouraging socialization. But the truth of the matter is that the sheer rank and file of the MMO scene follows the same pattern, and frankly, as a gamer and as someone with game design aspirations, it's getting a bit old.

Up next in part two of this rambling diatribe: I'll attempt to describe the beginnings of a system that actually lives up to the stratospheric expectations I've just set for myself.  

Continue Reading

Tuesday, May 19, 2009

0
Food for Thought

Bit = 0 or 1
Nibble = 4 Bits
Byte = 8 Bits
Kilobyte = 1024 Bytes
Megabyte = 1024 Kilobytes
Gigabyte = 1024 Megabytes
Terabyte = 1024 Gigabytes
Petabyte = 1024 Terabytes
Exabyte = 1024 Petabytes
Zettabyte = 1024 Exabytes
Yottabyte = 1024 Zettabytes
Xonabyte = 1024 Yottabytes
Wekabyte = 1024 Xonabytes
Yundabyte = 1024 Wekabytes
Udabyte = 1024 Yundabytes
Tredabyte = 1024 Udabytes
Sortabyte = 1024 Tredabytes
Rintabyte = 1024 Sortabytes
Quexabyte = 1024 Rintabytes
Peptabyte = 1024 Quexabytes
Ochabyte = 1024 Peptabytes
Nenabyte = 1024 Ochabytes
Mingabyte = 1024 Nenabytes
Lumabyte = 1024 Mingabytes

We are currently in transition from the era of Gigabytes to Terabytes. Already, there are predictions that by next year there will be a Zettabyte of information on the Net. Assuming Moore's Law remains true, what will we be capable of in an era of Udabytes, and how soon will that be? Keep in mind that even at this early juncture we've already got enough storage to hold an entire human genome. It's also worth noting that I entered high school in the era of Megabytes, and by the time I had graduated we were firmly in the realm of Gigabytes. Over my stay in Maine I read Down and Out in the Magic Kingdom by Internet demigod Cory Doctorow (I highly recommend it, by the way). Perhaps we aren't as far from a world of complete mental backups and digitally imparted immortality as we might think.
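For the curious, the 1024-per-step scaling in the table above is trivial to play with. Here's a quick sketch; note that everything past Yotta is informal coinage, not an official SI prefix:

# Express a byte count in the largest whole unit from the table above.
# Prefixes beyond "Yotta" are informal coinages, not SI standards.

PREFIXES = ["", "Kilo", "Mega", "Giga", "Tera", "Peta", "Exa",
            "Zetta", "Yotta", "Xona", "Weka", "Yunda", "Uda"]

def humanize(num_bytes):
    step = 0
    while num_bytes >= 1024 and step < len(PREFIXES) - 1:
        num_bytes /= 1024.0
        step += 1
    return "%.2f %sbytes" % (num_bytes, PREFIXES[step])

# humanize(3 * 1024 ** 4)  ->  "3.00 Terabytes"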

Continue Reading

0
Regarding Silence

I fear I've put this post off for far too long. I created this blog as a way of communicating my thoughts honestly with anyone who cares to listen, so it's a bit silly that I find myself editing myself with regard to what to talk about. Part of my hesitation, I suspect, is a deep-seated need to appear as a capable and competent professional. While such a need is a worthwhile consideration, I have recently come to the conclusion that using it as the determining factor for what I say here is counterproductive. What is the point of having a personal blog if I'm not using it to share what's on my mind? Is it really such a sin to actually do so (within reason, of course)? Now, this may be nothing more than literary navel-gazing, but as the saying goes, the first step to solving a problem is to admit you have a problem.
Continue Reading

Friday, May 15, 2009

0
Maine, Haircut, and a new Painting


So it's been a few days; OK, more than a few. Here's what's been happening in my little corner of the universe. For the last week or so my fiancée and I were up in my home state of Maine to get some prep work done for our wedding, and boy, what progress we made! We not only managed to get a cake baker and a florist, we picked up the dress (which I'm not allowed to see, so I can only operate on hearsay evidence), and also managed to blast down to Boston for a day to do our engagement photos with the awesome folks from JAGStudios. I've been told they'll be posting the pictures tonight, and hopefully I'll have a link up to them shortly! While I wanted to have it done in time for the shoot, I procrastinated and only just got a long overdue haircut. As much as I love the shaggy look, it just doesn't cut it down here in Atlanta during the summer. Now I'm back down in Atlanta, and by way of an apology for not posting during the last week, I present a corporate monster:

Cute, neh?

I've also got about 4 or 5 ideas floating around in my noggin that need to be written down.  Perhaps later tonight I'll find the time to do it, or maybe Star Trek and engagement photos will consume my evening!

Continue Reading

Tuesday, May 5, 2009

2
The Deep Eye Viewer, or how to turn the SL machinima scene on its head.

I love the concept of doing machinima in Second Life. No other platform gives a machinimatographer such absolute control over their work. But by the same token, no other platform is quite as frustrating. Second Life was never made with in-depth character acting, dramatic lighting, advanced graphics, or cinematic camerawork in mind. Lag and limitations in the graphics engine, and even in the avatar models themselves, hold back a lot of Second Life's potential for high-quality, compelling cinema. That said, the quality of work produced by SL machinimatographers is a testament to their creative will and technical savvy. But is the nature of SL in and of itself the problem, or just one part of it?

I'd like to back up for a second and challenge one of the assumptions common in most forms of machinima today: namely, the assumption that machinima is the direct, per-pixel recording of live game engine output. Why do we do this? In closed game systems, such as Warcraft, it makes sense; what appears on the screen is pretty much the only accessible output. True, you can perform a GL rip and get what amounts to a 3D photograph of the game geometry, but in terms of using an engine as a filming apparatus, this method doesn't hold much mainstream value. Some games, such as Halo 2 and 3, open things up a little by providing a replay tool, which allows for more advanced shot sequences and interesting camera angles. Yet nothing comes close to the openness of SL, which is literally streaming data about not only the character's surroundings but also up-to-the-second changes in status, all in a data stream accessed and interpreted through an open source viewer. Why are we settling for a system of capturing the pixels cranked out by a video card working overtime, only to receive footage of so-so quality, when we could be capturing event data from this stream and saving it to a file for later rendering and, yes, even editing? Imagine what could be possible if you could shoot a scene, decide that your character came into the scene a bit too close to the camera, and instead of having to reshoot, simply select the character and shift his performance into the desired position. Imagine being able to apply such forbidden wonders as depth of field and raytraced lighting to your shots, where the only limit on visual quality would be how long you wanted to wait for the final footage to render. Machinima is supposed to marry the advantages of live action and animation, and a system like this would make good on that. It would also knock down the performance barrier, allowing those with less-than-stellar computers to still shoot beautiful works of machinima. Call it crazy if you want; I call it the Deep Eye Viewer.

In essence the viewer would operate as follows. The machinimatographer would set up their scene as normal, taking into consideration all of the usual concerns of staging, props, animations, etc. They would then activate a pre-record mode on their viewer, which would scope out the surrounding area and take note of all major assets present. This would include terrain, object UUIDs and position data (note: not actually ripping the prim parameters, just getting a reference for later recall), avatar appearances and their initial positioning, and finally windlight settings. This in essence creates a snapshot of everything that will be required later to re-rez the scene in a semi-local "sim" for rendering and editing. Once ready, the viewer would prompt the machinimatographer, who could then activate the "recording" mode. This would begin capturing real-time animation and position data from the pre-recorded avatars, in addition to the camera position and motion. Once the machinimatographer is satisfied with the take, they can stop recording. The recording process can be repeated ad nauseam, with each recording saved as a unique "take" within the data file.
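For clarity's sake, here's a rough sketch of what the pre-record snapshot and the per-take event data might look like. Everything here is hypothetical structure of my own invention, not actual viewer code:

# A rough sketch of what a Deep Eye data file might hold.
# All class and field names are hypothetical illustrations.

import time

class SceneSnapshot:
    """Pre-record pass: references to everything needed to re-rez later."""
    def __init__(self):
        self.object_refs = {}           # UUID -> (position, rotation) reference
        self.avatar_appearances = {}    # avatar UUID -> appearance reference
        self.terrain_ref = None
        self.windlight_settings = None

class Take:
    """One recording pass: timestamped event data, not pixels."""
    def __init__(self, name):
        self.name = name
        self.events = []                # (timestamp, subject_uuid, event_data)

    def record_event(self, subject_uuid, event_data):
        self.events.append((time.time(), subject_uuid, event_data))

class DeepEyeSession:
    def __init__(self):
        self.snapshot = SceneSnapshot() # filled in by the pre-record pass
        self.takes = []

    def new_take(self):
        take = Take("take_%d" % (len(self.takes) + 1))
        self.takes.append(take)
        return take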

To review or edit a piece of recorded footage, the machinimatographer would select a file from their hard drive and the scene would be loaded into a semi-local "sim" (assets still being called from the grid). The user could then play, pause, rewind, and fast-forward through the captured data, edit scene element properties, and mute scene objects from visibility. Muting is useful in cases where the user wishes either to isolate certain scene elements (opening up the possibility of green screen for machinima) or to remove extraneous bits of the scene which detract from the overall effectiveness of the shot. The user could add additional cameras to the scene, in effect allowing for multicam setups of the same action. Also addable would be advanced lighting setups to enhance the pre-existing lighting, such as spotlights and negative-intensity lights to add areas of shadow.
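Playback then becomes a matter of scrubbing through that event data. Here's a sketch of how muting might work on top of the hypothetical Take structure above:

# Scrub through a take's captured events, skipping muted scene elements.
# Builds on the hypothetical Take class sketched earlier.

def frame_state(take, playhead, muted_uuids=frozenset()):
    """Return the latest event per subject at a given playhead time,
    omitting anything the user has muted from visibility."""
    state = {}
    for timestamp, uuid, event_data in take.events:
        if timestamp > playhead or uuid in muted_uuids:
            continue
        state[uuid] = event_data    # events arrive in capture order
    return state

# Isolating one actor for a green-screen pass:
# visible = frame_state(take, playhead=12.5,
#                       muted_uuids=set(all_uuids) - {actor_uuid})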

One particular addition to the scene data that would be exceedingly useful is the ability to overlay facial animation data onto an avatar's performance. Let's face it, the current expressions in SL are clunky at best and downright off-putting at worst. Imagine shooting a scene and then recording the facial acting through a computer vision system that uses a webcam to interpret your expression (oh yeah, it's possible). The same could be done for the hands, which right now are little more than great big clunky mitts. The level of nuance these enhancements could bring would be significant, to say the least. These are advanced functions, to be sure, but ones that would have a major impact on the quality of machinima produced with such a system, and something for which Deep Eye would be uniquely suited.
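In data terms, a facial track captured this way could simply be merged into the take's event stream. A tiny sketch, again using the hypothetical structures from above:

# Merge a separately captured facial track into a take's event stream.
# face_track is assumed to be a list of (timestamp, expression_data)
# pairs from some webcam-based capture tool (hypothetical).

def overlay_facial_track(take, avatar_uuid, face_track):
    for timestamp, expression_data in face_track:
        take.events.append((timestamp, avatar_uuid,
                            {"type": "expression", "data": expression_data}))
    take.events.sort(key=lambda event: event[0])    # keep timeline order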

There are of course questions to be addressed before any of this can leave paper. For example, would SL allow for the sort of on-demand asset-rezzing described, and if so, what would its limitations be? Additionally, there are the obvious concerns about the scope of this project and the amount of effort required to bring it to fruition. Another valid question is how to marry this data playback to a rendering system. My hunch would be to leverage existing rendering engines such as Blender's, although leaving the interface open to allow for user choice may very well be a valid option too.

All in all this might be a crazy rant, but hey, that's what I'm here for.  

Continue Reading