Twitter Bootstrap Carousel Full Markup Example

I really like Twitter Bootstrap, but I’ve noticed that their examples page only shows minimal markup for some of the more exciting features, leaving out little details like the “carousel-caption” class, which I had to view the page source to find.

If you were looking for how they do their clever captions at the bottom of each carousel image, look no further.

Since the carousel may or may not play automatically, I recommend triggering it with JavaScript; live example here.

Below is the full markup of a Twitter Bootstrap carousel:

<div class="container">
    <div class="row">
        <div class="span12">
            
            <div id="myCarousel" class="carousel slide">
                <ol class="carousel-indicators">
                    <li data-target="#myCarousel" data-slide-to="0" class="active"></li>
                    <li data-target="#myCarousel" data-slide-to="1"></li>
                    <li data-target="#myCarousel" data-slide-to="2"></li>
                </ol>
                <div class="carousel-inner">
                    <div class="item active">
                        <img src="http://mattlockyer.com/wp-content/uploads/2011/11/Example8-720.jpg" alt="http://www.mattlockyer.com">
                        <div class="carousel-caption">
                             <h4>First Thumbnail label</h4>

                            <p>Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec id elit non mi porta gravida at eget metus. Nullam id dolor id nibh ultricies vehicula ut id elit.</p>
                        </div>
                    </div>
                    <div class="item">
                        <img src="http://mattlockyer.com/wp-content/uploads/2011/11/Desktop2.jpg" alt="http://www.mattlockyer.com">
                        <div class="carousel-caption">
                             <h4>Second Thumbnail label</h4>

                            <p>Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec id elit non mi porta gravida at eget metus. Nullam id dolor id nibh ultricies vehicula ut id elit.</p>
                        </div>
                    </div>
                    <div class="item">
                        <img src="http://mattlockyer.com/wp-content/uploads/2011/11/tree.jpg" alt="http://www.mattlockyer.com">
                        <div class="carousel-caption">
                             <h4>Third Thumbnail label</h4>

                            <p>Cras justo odio, dapibus ac facilisis in, egestas eget quam. Donec id elit non mi porta gravida at eget metus. Nullam id dolor id nibh ultricies vehicula ut id elit.</p>
                        </div>
                    </div>
                </div>
                <a class="left carousel-control" href="#myCarousel" data-slide="prev">‹</a>
                <a class="right carousel-control" href="#myCarousel" data-slide="next">›</a>
            </div>
        </div>
    </div>
</div>
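As mentioned above, I recommend starting the carousel with JavaScript. A minimal sketch, assuming jQuery and the Bootstrap carousel plugin are already loaded on the page and the `#myCarousel` id matches the markup:

```javascript
// Start the carousel once the DOM is ready; without this call it may not auto-cycle.
$(document).ready(function () {
    $('#myCarousel').carousel({
        interval: 5000 // milliseconds between slides; pass false to stop auto-cycling
    });
});
```

You can also jump slides programmatically with `$('#myCarousel').carousel(2)` or pause with `$('#myCarousel').carousel('pause')`.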

Enjoy!

Posted in Bootstrap, Code | 2 Responses

HTML5 Canvas API, Paper.js, Raphael.js, Processing.js: Performance, Benchmarks and a New Trick

An original benchmark posted here showed Processing.js lagging roughly 68% behind native Canvas API calls performing essentially the same work.

I was curious where the bottleneck for Processing.js was in the original test, so I exposed the Processing canvas’s context to the sketch itself in the head of the document.

Then I simply used native Canvas API calls from within the Processing.js draw loop, like so:

// placed in the head of the document to expose the Processing.js canvas to the sketch
// (assumes the <canvas> element Processing.js renders into has id "sketch")
var processingCanvas = document.getElementById('sketch');
var pctx = processingCanvas.getContext('2d');

// inside the Processing.js draw() loop
pctx.fillStyle = "#000";
pctx.fillRect(0, 0, 50, 50);
pctx.clearRect(0, 0, 100, 100);

My updated test is here and…

Using the native Canvas API from within Processing.js is MUCH faster.

Perhaps the fastest: in my tests it even outperformed the original Canvas API test by 2-4% on Chrome and Android!

Check this out to see how you can use the native Canvas API in your sketches for a performance boost!

Posted in Code, Processing | Leave a comment

Minecraft Steve Easy Costume Template (wine box design)

I was recently invited to a costume party. I really hate having to come up with a costume and buy a bunch of cheap crap that will eventually be thrown out or donated, so I decided to get a little crafty this time…

I looked at some other templates for Minecraft Steve costumes, but they were cumbersome and required too much custom cardboard (what a waste)!

So I decided that a simple wine box would do the trick and made my own template. Hope you enjoy!

Minecraft Steve Template Wine Box PDF and PSD (.zip)

Posted in Unfiltered | 6 Responses

aMotion Toolkit Video Recording Preview

Developing a cross-platform motion texture engine in Java for creating affective motion textures with a node-based, data-flow UI.

Using the Xuggler library for recording H.264 video at a seamless 60 fps.

Using LWJGL for OpenGL and OpenCL.

Using Jetty to serve an HTML5 UI that uses WebSockets, so the application can be controlled over the web by phones and other devices collaboratively.

Posted in Works, School | Leave a comment

Multimedia Programming

A while back I taught a course at my school titled Multimedia Programming for Artists and Designers.

The course introduced artists and designers to the basic concepts of “drawing and animating with code”.

It leveraged some of the latest web technologies to create interactive and engaging content serving as both lecture and study material.

Here is the website: http://www.mattlockyer.com/multimediaprogramming

Posted in Works, School | Leave a comment

Friend Flip

In my spare time I have put together a simple, clean app called Friend Flip.

I made Friend Flip because I was sick of scrolling through my news feed like a zombie (when I even did at all), and decided that matching up my friends and training my brain would be much more fun. Friend Flip is built with Adobe AIR and was done front to back in one week. I used Stage3D with ND2D as a library, even though I probably didn’t need to ;). I also used the ActionScript Facebook API and Minimal Comps for the buttons, and that’s it.

Try it here: https://play.google.com/store/apps/details?id=air.air.FriendFlipAir

Here is the app description on Google Play:

Train your brain and check your FB news feed at the same time!

An exciting way to train your brain and keep up to date with your friends. Friend Flip is a memory game based on your real-time Facebook news feed. Flip and match friends to read their status updates or just flip out! Have fun and enjoy!

Note: You will need to be online and have a FB account to play.

Posted in Works | Leave a comment

Genetic Algorithms the Easy Way

Most of my work to date has been reactive; that is, there is a 1-1 connection between what the user does and what the application does.

Recently for a class project I decided to use Genetic Algorithms to enhance the user experience and potentially augment some of the creative processes of working with a tool.

I always stayed away from this type of coding because it didn’t match the types of applications I was interested in making.

Additionally, GAs have always been shrouded in mystery for most coders, since the concepts are described using genetic metaphors and binary crossover operations.

I took a somewhat different tack for this assignment, and I’m posting this for the benefit of anyone interested in a simple GA implementation for creative coding.

This assumes the user makes an input that can be mapped, directly or indirectly, to a property of your system.

Here are the steps:

  1. Make the user’s input your fitness value (user did x, x = fitness)
  2. For every object in your system, measure the distance from the fitness value to the matching object property
  3. Sort the results
  4. Collect the top 10% – 25% of objects with property values closest to the fitness value
  5. For every object, randomly select 2 or more candidates from that pool and assign the current object the average of their property values
  6. Optionally blend in the fitness value itself (weighted by a learning factor)
  7. Optionally add a random value (weighted)
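The steps above can be sketched in a few lines of JavaScript. This is a minimal illustration, assuming each object has a single numeric property being evolved toward the user’s input; the names `evolve`, `poolFraction`, `learningRate` and `mutation` are mine, not from my actual application:

```javascript
function evolve(objects, fitness, opts) {
  var poolFraction = opts.poolFraction || 0.25; // step 4: keep the top 10%-25%
  var learningRate = opts.learningRate || 0.1;  // step 6: weight of the fitness value
  var mutation = opts.mutation || 0;            // step 7: weight of the random value

  // Steps 2-4: sort by distance to the fitness value and keep the closest few.
  var pool = objects
    .slice()
    .sort(function (a, b) {
      return Math.abs(a.prop - fitness) - Math.abs(b.prop - fitness);
    })
    .slice(0, Math.max(2, Math.floor(objects.length * poolFraction)));

  // Steps 5-7: for each object, average two random candidates from the pool,
  // then optionally blend in the fitness value and some noise.
  return objects.map(function () {
    var a = pool[Math.floor(Math.random() * pool.length)].prop;
    var b = pool[Math.floor(Math.random() * pool.length)].prop;
    var next = (a + b) / 2;                   // step 5
    next += learningRate * (fitness - next);  // step 6
    next += mutation * (Math.random() - 0.5); // step 7
    return { prop: next };
  });
}
```

Calling `evolve` once per user input drifts the whole population toward what the user is doing, rather than snapping to it, which is exactly the “learning” feel described below.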

My application maps user gestures to motion properties such as linear angle, linear speed, wave amplitude, wave speed, curve speed and so on…

By employing the use of this simple GA, the user’s actions are not mapped 1-1 to a specific resulting motion type. Instead the system appears to “learn” the desired motion by virtue of the selection and mutation processes outlined in the steps above.

Here are some initial results (does not really show the GA part, but until the app is released assume the experience is amazing):

Posted in Works, School | 1 Response

aMotion Toolkit – Submission to CAe 2012

Demonstration video from research at SFU.

Submitted to Computational Aesthetics 2012.


Abstract from paper:

Visual artists and designers frequently use carefully crafted motion textures — patterns of ambient motion throughout a scene — to imbue the atmosphere with affect. The design of such ambient visual cues is an elusive topic that has been studied by painters, theatre directors, scenic designers, lighting designers, filmmakers, producers, and artists for years. Recent research shows that such motion textures have the capacity to be both perceptually efficient and powerfully evocative, but adding them to scenes requires careful manipulation “by hand”: no tools currently exist to facilitate this integration. In this paper we describe the design and development of the aMotion toolkit: a palette of composable motion “brushes” for image and video based on our affective motion research. We discuss insights from an on-going qualitative study with professional visual effects designers into how such capabilities can enhance their current practice.

Posted in School | Leave a comment

Adobe AIR Mobile Development Talk @ Vancouver Flash Platform Meetup

Discussing issues of “deploy everywhere” mobile technologies in general.

Contents (in order):

  • Pros and cons of: Unity, Corona SDK, Marmalade, PhoneGap, Titanium, HTML5 / CSS / JS.
  • Pros and cons of Adobe Air platform for mobile development.
  • Development samples of Adobe Air APIs.
  • Discussion of performance issues, and performance tips.
  • What’s broken at Adobe?
  • Q + A
Posted in Works, News | Leave a comment

Best Paper CAe 2011

Last summer my research won the Best Paper award at Computational Aesthetics 2011, hosted here in Vancouver.

Link to papers

Here is the abstract from the paper:

The communication of emotion and the creation of affect are core to creating immersive and engaging experiences, such as those in performance, games and simulation. They often rely on atmospheric cues that influence how an environment feels. The design of such ambient visual cues for affect is an elusive topic that has been studied by painters, theatre directors, scenic designers, lighting designers, filmmakers, producers, and artists for years. Research shows that simple motions have the capacity to be both perceptually efficient and powerfully evocative, and motion textures – patterns of ambient motion throughout the scene – are frequently used to imbue the atmosphere with affect. To date there is little empirical evidence of what properties of motion texture are most influential in this affect. In this paper we report the results of a study of simple, abstract motion textures that show path curvature, speed and texture layout can influence affective impressions such as valence, comfort, urgency and in.

Posted in School | Leave a comment