Investigation of Virtual Worlds for Art & Design Teaching and Learning at Leeds College of Art.
Wednesday, November 18, 2009
Video Streaming on the Collective Island
With our college getting a QuickTime Streaming Server up and running, I can finally look at some ideas for combining live video with Second Life activities.
It opens up the opportunity of broadcasting live performance works at the college into an SL gallery context, as well as RL/SL mashup performances.
I also want to look at ways of getting the students to take ownership of it, as a way of creating live shows and mixed reality events.
What's good is the server is constantly on (well, if it doesn't crash) and the streams are accessible from outside the college firewall. Presently live broadcasts have to be done from within the college, so we'll have to see if in the future we can get access to the QuickTime server when we're offsite.
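For anyone who wants to wire a stream up the same way, one scriptable route is the parcel media commands. Here's a minimal LSL sketch, assuming a hypothetical RTSP address on the streaming server and the UUID of whatever texture you've put on your screen prim (both are placeholders to swap for your own).

// Minimal sketch: point the parcel media at a QuickTime/RTSP stream and play it.
// The stream URL and texture UUID are placeholders - replace them with your own.
// The object needs to be owned by the parcel owner for parcel media commands to work.
string STREAM_URL = "rtsp://streaming.example.ac.uk/live/performance.sdp";
key SCREEN_TEXTURE = "00000000-0000-0000-0000-000000000000"; // texture applied to the screen prim

default
{
    touch_start(integer total_number)
    {
        llParcelMediaCommandList([
            PARCEL_MEDIA_COMMAND_TEXTURE, SCREEN_TEXTURE,
            PARCEL_MEDIA_COMMAND_URL, STREAM_URL,
            PARCEL_MEDIA_COMMAND_PLAY
        ]);
    }
}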
Something I'm quite excited about exploring in the future is how it can also be tied into Lifelong Learning & Creative Industry events, by streaming guest speakers from the college into SL, to be accessible by far-afield alumni and 'friends' of the college who can't attend in RL.
Anyway, that's the new toy to play with for a while....
Monday, October 12, 2009
Exploring Shot Composition Basics with SL
I've played with creating some Photography Grid HUDs before, and I thought I'd refine the idea into a post comparing two classic principles.
Rule of Thirds
The Rule of Thirds divides the frame into equal thirds vertically and horizontally. The intersections of the lines are considered points of interest, and the lines themselves can help balance out the elements of the shot.
The focus of interest here is the face, which falls on the top-right intersection. The body (mainly head and spine) falls along the right vertical third, and the ground lies along the bottom third.
The Final Shot -
more info on Rule of Thirds here
Dynamic Symmetry
Dynamic Symmetry is based on the Golden Section. Draw a diagonal from each corner of the frame to the opposite corner, then from the remaining corners draw lines that meet those diagonals at right angles. Again, the intersections of the lines are the hotspots of interest.
Again the focus of interest is the face, falling on the top-right intersection. The body falls vertically between the two areas of interest, and the ground runs through the areas of interest along the bottom.
The Final Shot.
Some further info on Golden Section / Phi 1.618 here -
and quite a creepy video on Golden Mean here - http://www.youtube.com/watch?v=2zWivbG0RIo
With either principle of composition, it is important to keep things simple - particularly having only ONE focus of interest.
Personally I prefer the dynamic symmetry layout, which seems more visually exciting when applied to HD video framing.
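If you want to try the comparison in-world yourself, the guts of a grid HUD are only a few lines. A minimal sketch, assuming you've uploaded two semi-transparent overlay textures (a thirds grid and a dynamic symmetry grid) - the UUIDs here are placeholders:

// Minimal composition-grid HUD sketch: touch the prim to toggle between a
// Rule of Thirds overlay and a Dynamic Symmetry overlay.
// Attach the prim as a HUD (e.g. HUD Center) and stretch it to cover the screen.
key THIRDS_GRID  = "00000000-0000-0000-0000-000000000001"; // placeholder texture UUID
key DYNAMIC_GRID = "00000000-0000-0000-0000-000000000002"; // placeholder texture UUID

integer showThirds = TRUE;

default
{
    state_entry()
    {
        llSetTexture(THIRDS_GRID, ALL_SIDES);
        llSetAlpha(0.5, ALL_SIDES); // semi-transparent so the scene shows through
    }
    touch_start(integer num)
    {
        showThirds = !showThirds;
        if (showThirds) llSetTexture(THIRDS_GRID, ALL_SIDES);
        else llSetTexture(DYNAMIC_GRID, ALL_SIDES);
    }
}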
Sunday, June 21, 2009
Design for Digital Media - End of Year Show 2009 in Second Life
Now open to the public - The Collective Island is host to Leeds College of Art's End of Year Show for the BA (Hons) Design for Digital Media course.
In the studio building you can find examples of development work and screenshots from Final Year Project work, plus links to the students' web portfolios.
Streaming in the second room of the studio building is a 15-minute showreel of moving image work... on it you'll find a personal tale about epilepsy, 3D animation, and music videos...
plus it's got a nifty reflective floor... :)
The big part of the SL exhibition is the work of Jetsunami Duell (Rob Kirk in RL), who has built a half-sim medieval city inspired by the Discworld books. It is designed for virtual world community role-players, as opposed to being a game level. The work has its own distinct look, with all textures hand-rendered... giving it a cartoony feel...
The Show will be up for several weeks, but once it's gone, it'll be gone... wiped clean for a new semester of teaching.
SLURL - http://slurl.com/secondlife/The%20Collective/189/33/36
Wednesday, June 03, 2009
Exploring Blue Mars
Blue Mars, by Avatar Reality, is another MMORPG/VW environment on the block, which will eventually give developers a big set of tools to create a wide range of online experiences. Blue Mars provides a unified login system, allowing your avatar to move between diverse virtual spaces using the same tech.
Here's a montage of the Blue Mars demo spaces, showing how the technology can be used for community spaces and games. It uses CryEngine 2 for the graphics, giving some impressive shaders for high-quality visuals in a VW. There's lots of potential for developers to create rich environments and online game content.
For educators, this is no SL replacement in terms of casual idea development and experimentation... it is a dev-heavy technology, but it has bags of potential for serious games or for creating visually rich virtual campuses.
It's also now a good opportunity to consider bringing in students/staff on Games Design / Digital Media courses to help develop content for other departments.
It's hard to judge how useful Blue Mars will be based on these demos, as ultimately you can create whatever space you want; if it is to succeed, it's now down to some exciting developers and the community-building going on.
In the short time I've played with it and looked at the other BM preview tools, my feeling is that it is more comparable with Multiverse - http://www.multiverse.net/index.html - than Second Life.
Also, if you want to get a good feel for Blue Mars, you could get a copy of Crysis for your PC (which you can find in the bargain bin now), as it comes with the Sandbox Editor modding tools, which allow for machinima making too - here's an example - http://www.youtube.com/watch?v=8R_i-0lZoj4
Tuesday, May 26, 2009
Virtual Environments Module - Year One
In the future I'll write a proper longer post about this module, but for now, here are some pics and a link to a Flickr set for a Virtual Environments module that finished last week.
Flickr Set of Projects
In the module, the students were set a brief to create a prototype experience based around an area from the DirectGov website. The students were advised to focus their projects on the themes of 'Environment & Greener Living' or 'Health and Wellbeing'.
It was decided to take this module into a more corporate use of virtual worlds, rather than a more personal immersive experience, so the students could experience and reflect on designing for a broader audience and on content creation issues. It was also an introduction to how games and VWs could be used for serious subject matter.
Sunday, April 26, 2009
Building a new 'lighter' Communal Whiteboard
It's been nearly 3 years since I made my last whiteboard, and I thought it was time to do some tweaking to make a 'lighter' version.
This 'lighter' version removes the troublesome overlay tools, which seemed rarely used... and focuses on just one 'Pointer'.
It uses the newish llDetectedTouch functions, which allow a user to touch anywhere on the image and have the pointer move to that spot. This is so much more friendly than before: originally I had to take control of the avatar and use the arrow keys to move the pointer around, which was made worse when the SL client had the 'Release Keys' button removed.
The new whiteboard can also be turned to any angle without affecting this function... again, the original used sim co-ords, so you had to stick the board either North, South, East or West facing, and soooo many people wanted it at 45 degrees in the corner of a room.
You can also 'Scale' the board to fit your place, without any adverse effect on the Aspect Ratio button on the board, which is now set proportionate to the scaled board's size. So resize the board down to 2m wide if you want it in your office, or keep it 10m wide for the lecture hall.
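For anyone curious how the touch-anywhere pointer works, here's a minimal sketch of the idea rather than the board's actual script: llDetectedTouchST reports where on the face you touched as a 0-1 coordinate, which can be turned into a local offset for a linked pointer prim - and local offsets survive rotating or rescaling the board. It assumes the pointer is link 2 and that the board face lies in the root prim's local Y-Z plane.

// Minimal sketch of the touch-to-move pointer idea (not the full whiteboard script).
// Assumes link 2 is a small 'pointer' prim and the touched board face lies in the
// root prim's local Y-Z plane - adjust the axis mapping for your own build.
integer POINTER_LINK = 2;

default
{
    touch_start(integer num)
    {
        vector st = llDetectedTouchST(0);            // where on the face: x,y each 0.0 - 1.0
        if (st == TOUCH_INVALID_TEXCOORD) return;    // viewer too old to report touch position
        vector size = llGetScale();                  // current board size, so rescaling still works
        // Convert the 0-1 face coordinate into a local offset from the board's centre
        vector offset = <0.03, (st.x - 0.5) * size.y, (st.y - 0.5) * size.z>;
        llSetLinkPrimitiveParamsFast(POINTER_LINK, [PRIM_POS_LOCAL, offset]);
    }
}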
It still keeps the most important thing though .... group access.
Group access allows other avatars to add images to the board by holding down the CTRL key while dragging images/textures onto it, and also lets other avatars delete images from the board's slideshow.
It's a simple addition, but it allows a group to build and edit a shared slideshow / photo essay.
If you are of a more didactic nature, you can always switch on the Lock, which only allows the owner to work with the board.
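Most of the group-access behaviour comes down to one call, llAllowInventoryDrop, which lets anyone (not just the owner) CTRL-drag textures into the prim's inventory. A stripped-down sketch of that part, with a simple owner-only lock flag, looks something like this:

// Stripped-down sketch of the group-access idea (not the full whiteboard script).
integer locked = FALSE; // when TRUE, only the owner can change the slideshow

default
{
    state_entry()
    {
        llAllowInventoryDrop(TRUE); // let non-owners CTRL-drag textures into this prim
    }
    changed(integer change)
    {
        // Fires when someone drops a texture in, or the inventory otherwise changes
        if (change & (CHANGED_INVENTORY | CHANGED_ALLOWED_DROP))
        {
            integer count = llGetInventoryNumber(INVENTORY_TEXTURE);
            llSay(0, "Slideshow now has " + (string)count + " images.");
        }
    }
    touch_start(integer num)
    {
        // Owner toggles the lock by touching the board (simplified for the sketch)
        if (llDetectedKey(0) == llGetOwner())
        {
            locked = !locked;
            llAllowInventoryDrop(!locked);
        }
    }
}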
So I'm just giving it a little road test first, and then in a week or so... I'll put it out in the wild.
Wednesday, April 22, 2009
Creating Content in Second Life ( slideshow )
Presentation from Learning in Virtual World Conference at Sunderland on 21st April 2009 - supported by RSC-Jisc Northern and HE Academy.
This slideshow is aimed at people with little to no knowledge of content creation tools in Second Life.
Creating Content in Second Life
Using the Multi-Cam Machinima Switcher
A while ago I built a multi-camera switcher in Second Life, when the new camera parameters function (llSetCameraParams) was added to LSL... [link]. A little while later I updated it with cam points the user could move by editing the linked prims... [link], and now I've finally got round to writing a blog post about it (amazing what you can achieve in 3 years...).
WHAT is the Multi-Cam Machinima Camera Switcher (MCMCS)?
This tool creates instant vision cutting between 8 camera setups, much like a TV vision mixer switching between multiple cameras in a studio. Using the MCMCS you can block out a range of camera positions, which you can then jump between when you go into production. This is particularly suited to machinima streamed LIVE out of Second Life, particularly interview-style shows, allowing the camera operator to cut from close-ups of avatars to a wider shot of the stage etc.
For educators, this tool can also be used to explore conventions of film and video production, particularly crossing the line (the 180 degree rule) or jump cuts, as well as developing an understanding of multi-camera shoots when access to real-life equipment is limited. It could also be used as a way of quickly developing animatics for video productions.
For machinima filmmakers - understand the limitations of this tool; it may be useful in some circumstances and not in others.
HOW to USE the Multi-Cam Machinima Camera Switcher (MCMCS)
1. Build your Set, Stage, Interview Room.....
2. Rez the MCMCS and place the big grey square so it's somewhere in the middle of your set.
3. Whilst in 'Edit Mode', move the MCMCS vertically down until the big grey block is under your set...
4. Now to edit the individual cameras - in the edit mode panel, click on 'Edit linked parts' (click on the image below to enlarge it).
5. Each camera is identified by colour and the floating text above it (red is camera one) - the sphere prim is the camera's position, the cube prim the camera's target (i.e. where you want to look). Whilst in edit linked parts mode, move the sphere and cube prims to set up the camera shot.
6. Sit on the camera to see what the shot looks like... (sitting on the camera makes all the cam prims and particles disappear)
7. Camera One (the red one we just edited) is selected by pressing the Up Arrow, and your camera view should be updated like so...
8. Stand your avatar up, go back to step 4, and repeat the process with all the other coloured pairs, to give you up to 8 different camera shot framings.
Each camera prim has floating text above it denoting the camera number and the corresponding arrow key that needs to be pressed.
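Under the hood this is just llSetCameraParams driven by the arrow keys. The sketch below isn't the MCMCS script itself, but it shows the core pattern: when an avatar sits, take camera and control permissions, then cut between stored shots on key presses. The two hard-coded positions/focuses are placeholders - the real tool reads them from the sphere/cube prim pairs.

// Core pattern only (not the full MCMCS script): sit an avatar on the object,
// take over their camera and the arrow keys, and cut between stored shots.
// The positions/focuses below are placeholder region coordinates.
list camPositions = [ <10.0, 10.0, 25.0>, <14.0, 6.0, 24.5> ];
list camFocuses   = [ <12.0, 12.0, 24.0>, <12.0, 12.0, 24.0> ];

cutTo(integer shot)
{
    llSetCameraParams([
        CAMERA_ACTIVE, 1,
        CAMERA_POSITION, llList2Vector(camPositions, shot),
        CAMERA_FOCUS, llList2Vector(camFocuses, shot),
        CAMERA_POSITION_LOCKED, TRUE,
        CAMERA_FOCUS_LOCKED, TRUE
    ]);
}

default
{
    state_entry()
    {
        llSitTarget(<0.0, 0.0, 0.5>, ZERO_ROTATION); // so the camera operator can sit on it
    }
    changed(integer change)
    {
        if (change & CHANGED_LINK)
        {
            key sitter = llAvatarOnSitTarget();
            if (sitter != NULL_KEY)
                llRequestPermissions(sitter, PERMISSION_CONTROL_CAMERA | PERMISSION_TAKE_CONTROLS);
        }
    }
    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TAKE_CONTROLS)
            llTakeControls(CONTROL_FWD | CONTROL_BACK, TRUE, FALSE); // up / down arrow keys
        if (perm & PERMISSION_CONTROL_CAMERA)
            cutTo(0); // start on camera one
    }
    control(key id, integer level, integer edge)
    {
        integer pressed = level & edge; // keys that have just gone down
        if (pressed & CONTROL_FWD)  cutTo(0); // up arrow = camera one
        if (pressed & CONTROL_BACK) cutTo(1); // down arrow = camera two
    }
}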
-------
If you think this is useful for you - you can pick up a copy outside my TechGrrl Store for L$ 25
http://slurl.com/secondlife/Gourdneck/197/233/67
Labels:
machinima,
second life,
teaching tools,
transferable skills
Thursday, April 16, 2009
How Big is the Mona Lisa?
Mona Lisa, 1503-1506, Leonardo da Vinci
Unless you've been to the Louvre, you've probably only experienced this painting through books, the web and student posters... but how big is it actually?
Basically I'm having some thoughts on how to entice the Critical and Contextual Studies dept onto our Second Life island...
In a quick straw poll of students around the college, when asked how big the Mona Lisa was, only a few could give an accurate size. Though all are familiar with the history of the painting, their feeling for its real size is, I suspect, skewed by props on TV & film, seeing the picture on the web, and by its sheer famousness.
The Mona Lisa is 77cm x 53cm (30in x 20 7/8in)
One thing I've always liked about Second Life galleries is that you can get a sense of the scale of a painting or image (as the artist may or may not have intended), something that's lacking when you see the same image embedded in a webpage or PowerPoint presentation.
Having the avatar to scale the art against at least starts to lend itself to an understanding of the intentions of the artist...
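Setting this up in-world is trivial - one thin prim per artwork, scaled to the real dimensions. A quick sketch using the Mona Lisa's 77cm x 53cm (the texture UUID is a placeholder for your own upload of the image):

// Sketch: a 'real size' picture prim. Drop this into a fresh box prim and it
// resizes itself to the Mona Lisa's actual dimensions (0.77m x 0.53m).
key PAINTING_TEXTURE = "00000000-0000-0000-0000-000000000000"; // placeholder texture UUID

default
{
    state_entry()
    {
        llSetScale(<0.02, 0.53, 0.77>); // x = thickness, y = width, z = height, in metres
        llSetTexture(PAINTING_TEXTURE, ALL_SIDES);
        llSetText("Mona Lisa - 77cm x 53cm", <1.0, 1.0, 1.0>, 1.0);
    }
}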
Here's Pablo Picasso's Guernica - again you can get some engagement with the scale of the piece, which would be lacking from a book or web-based picture.
With a bit of skill, a tutor could, rather than using a PowerPoint presentation, give a virtual tour of a gallery space (be it as a group, or simply the tutor's view presented on a video projector), creating even more opportunities to discuss the artworks.
I'm using fine art works as an example, but the same principle could also be applied to graphic design and advertising - looking at the use of scale with posters & billboards by placing them in galleries as well as simulated spaces (e.g. a shopping mall - hmm, maybe a corporate example, but hopefully you get the point). This allows for a critical discussion of how the image works in the space and for its intended audience, and is particularly useful for spaces that students might not get ready access to.
Secondly, having a 3D online gallery to place work in is also a great tool for exploring some of the curatorial skills of putting an exhibition together. Not only does a student have to sort out the collection of images, but they can also think about how the images are placed within the space and against each other. This can translate to a real-life show, allowing several options to be considered before hanging the work.
One thing to take into account is Second Life's propensity for taller-than-average avatars, and the default camera position, which make things feel smaller than reality; playing with the camera (viewing in mouselook) and using other props that give a sense of real-world scale will compensate for this.
Personally, I still think it's important to go on physical field trips to a gallery when one can, but it's great to see projects like this arriving in Second Life - a replica of The Old Masters Picture Gallery, Dresden - that's only a TP away...
SLURL - http://slurl.com/secondlife/Dresden Gallery/128/128/27
Labels:
second life,
teaching spaces,
transferable skills
Sunday, March 29, 2009
Playing with Processing, Pachube and Second Life
Continuing my play with Pachube, which is really handy if you want to quickly prototype something that connects SL and RL stuff.
A camera-tracking script in Processing, updating info in Second Life to rez a prim at a similar position.
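The Second Life end of this is fairly simple: poll the feed, read back an x/y pair, and rez a marker prim at the matching spot. A rough sketch of that receiving side - the feed URL, API key and the assumption that the tracker sends normalised 0-1 coordinates are all placeholders to adapt:

// Rough sketch of the SL receiving side: poll a Pachube feed for an x,y position
// (sent from the Processing camera tracker) and rez a marker prim there.
// FEED_URL is a placeholder - substitute your own feed ID and API key.
// A prim called 'marker' must be in this object's inventory.
string FEED_URL = "http://www.pachube.com/api/feeds/00000.csv?key=YOUR_API_KEY";

default
{
    state_entry()
    {
        llSetTimerEvent(5.0); // Pachube data refreshes roughly every 5 seconds
    }
    timer()
    {
        llHTTPRequest(FEED_URL, [HTTP_METHOD, "GET"], "");
    }
    http_response(key id, integer status, list meta, string body)
    {
        if (status != 200) return;
        list values = llCSV2List(body);            // e.g. "0.42,0.73"
        float x = (float)llList2String(values, 0); // assumed normalised 0-1 from the tracker
        float y = (float)llList2String(values, 1);
        // Map the normalised coordinates onto a 4m x 4m area above this object
        vector pos = llGetPos() + <x * 4.0 - 2.0, y * 4.0 - 2.0, 1.0>;
        llRezObject("marker", pos, ZERO_VECTOR, ZERO_ROTATION, 0);
    }
}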
Saturday, March 28, 2009
Playing with Pachube
Finally had a little time to investigate a web-based service called Pachube. For those that have not heard of it, Pachube is a service that enables users to share and connect real time sensor data from objects, devices and environments both real and virtual.
This windsock object in Second Life is connected to an output feed from a weather station in Grimsby, which returns data as comma-separated values.
The windspeed data is used to affect the tension of the flexiprim that makes up the windsock, visualising windspeed in a more familiar fashion.
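The windsock itself is only a few lines of script: fetch the feed, pull out the windspeed value, and remap it onto the flexiprim's parameters. A cut-down sketch - the feed URL, the assumption that windspeed is the first CSV value, and the windspeed-to-tension mapping are all placeholders:

// Cut-down sketch of the windsock idea: poll a Pachube weather feed and map the
// windspeed value onto the flexiprim's settings. URL and mapping are placeholders.
string FEED_URL = "http://www.pachube.com/api/feeds/00000.csv?key=YOUR_API_KEY";

default
{
    state_entry()
    {
        llSetTimerEvent(30.0); // weather doesn't change fast, so poll gently
    }
    timer()
    {
        llHTTPRequest(FEED_URL, [HTTP_METHOD, "GET"], "");
    }
    http_response(key id, integer status, list meta, string body)
    {
        if (status != 200) return;
        list values = llCSV2List(body);
        float windspeed = (float)llList2String(values, 0); // assumes windspeed is the first value
        // Made-up mapping for the sketch: stronger wind = stiffer, more pulled-out sock
        float tension = 1.0 + windspeed / 3.0;
        if (tension > 10.0) tension = 10.0;
        llSetPrimitiveParams([PRIM_FLEXIBLE,
            TRUE,        // flexible on
            2,           // softness
            0.3,         // gravity
            0.1,         // friction
            3.0,         // wind responsiveness (fixed for the sketch)
            tension,     // tension driven by the feed
            ZERO_VECTOR  // no extra force
        ]);
    }
}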
Anyway, this was a simple exercise to see how Pachube works with an object in SL.
Using this principle, Second Life environments can be controlled by, or respond to, real-world real-time data through Pachube - for example an Arduino microcontroller, allowing a physical interactive object or wearable computer to control virtual content. The reverse is also true: objects in Second Life can send data to a Pachube feed to be shared, allowing an avatar to make a real-world installation change.
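Going the other way from the SL side looks something like the sketch below - here I'm pushing the number of avatars on the region up to a Pachube input feed every 30 seconds. The feed URL/API key and the choice of avatar count as the datapoint are just placeholders for whatever you want to share.

// Sketch of the SL-to-Pachube direction: push the region's avatar count to an
// input feed every 30 seconds. The feed URL and API key are placeholders.
string FEED_URL = "http://www.pachube.com/api/feeds/00000.csv?key=YOUR_API_KEY";

default
{
    state_entry()
    {
        llSetTimerEvent(30.0);
    }
    timer()
    {
        integer agents = llGetRegionAgentCount(); // how many avatars are on the region
        llHTTPRequest(FEED_URL, [HTTP_METHOD, "PUT"], (string)agents);
    }
    http_response(key id, integer status, list meta, string body)
    {
        if (status != 200) llOwnerSay("Pachube update failed: " + (string)status);
    }
}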
In my experimentation I had a play with Processing, and created a simple sketch that was affected by Second Life data on an input feed I created. If you are using Pachube with Processing, it is important to download the EEML library.
Whilst Pachube is in beta, data is refreshed every 5 secs, so GETting and POSTing data to a feed can't happen any faster. So for the moment, don't expect dynamic updating several times a second... (which would be great if I were pulling avatar data into some motion graphics sketch in Processing).
What's great with Pachube, though, is that it's a community of shared data, so many users could take the same data and use it in different ways... For myself, I may take real-world information simply to be part of some generative artwork in SL, whilst others may want to visualise the same data in another way. From this, interesting and unexpected results could happen...
From the Pachube website:
"The key aim is to facilitate interaction between remote environments, both physical and virtual. Apart from enabling direct connections between any two environments, it can also be used to facilitate many-to-many connections: just like a physical 'patch bay' (or telephone switchboard) Pachube enables any participating project to 'plug-in' to any other participating project in real time so that, for example, buildings, interactive installations or blogs can 'talk' and 'respond' to each other."
It is currently in beta, so I had a little play to get an idea of the fundamentals. To make full use of the service you need to be given an API key, to allow you to access outputs and create inputs.
Thursday, March 12, 2009
UCreative - Second Life Island
Back in the summer of 2008, I was asked to build a Second Life presence for the University for the Creative Arts (a specialist art university, which is an amalgamation of five UK-based colleges in Kent and Surrey). As the campus was split across several towns, SL was being explored as a potential way of creating a virtual community of practice, where graduate and post-graduate students could get together.
I was commissioned to create a basic low-prim scaffolding for an interdisciplinary island of the arts, and also to create something that gave an imaginative flavour of what could be achieved in virtual worlds, rather than slavishly re-creating the real-world campus.
The build was designed to get people navigating around a 3D space fast, especially by flying, but a teleport system was also in place for getting around quicker, or for those still learning to control their avatar.
The approach I went for was to create a giant tree, where the foundations of its base formed the main lecture theatre spaces, as well as public areas for exhibition and marketing - a sculpture/art garden, a more traditional white cube gallery space, and a sandbox area on the ground.
Very simplified tree branches formed platforms that would be given over to specific areas, e.g. moving image, fashion, communication design, and also staff development. Finally, high up above the clouds was a private sandbox area.
This use of height within the sim was to make efficient use of the island, rather than laying everything out flat on the ground.
Hopefully this system also keeps information/areas apart, but not completely compartmentalised. As it was a shared island, discipline areas could share space and resources - creating opportunities for students to work across courses.
Sunday, March 08, 2009
Avatar Skin and Clothes Map
Avatar skin and clothes map for a workshop on content creation in Second Life for 1st Years. Feel free to download a copy if you think it's useful.
Download
Avatar Attachment Points
Map of Avatar Attachment Points... for workshop teaching SL content creation to 1st Years. Feel free to download a copy if you find it useful.
Download
Sunday, February 22, 2009
WoW educators on Twitter
Educator?
on Twitter?
play World of Warcraft?
then join us on http://twittgroups.com/group/eduwow
or if flickr is your social media tool of choice
then share your pics at
Educators in World of Warcraft flickr Pool
Labels:
mashup,
teaching spaces,
world of warcraft