Category Archives: Internet
Rabbit R1: First Impressions
First impressions of the Rabbit R1 are not great. The industrial design of the device makes me want to love it, but simple tasks fail in ways that quickly frustrate.
In no particular order, here are the stumbles so far.
Website
The Rabbit website looks pleasant, but the company chose a smaller-than-average font size that is painful to read. I wish they had favored accessibility over looking cool.
WiFi
WiFi currently sucks. The only way I can get back to a previously joined network on the R1 is to forget the saved password. To add insult to injury, tapping away on the virtual keyboard is torturous (small virtual keys, close together). The device supports Bluetooth, but only for speakers and headsets – not Bluetooth keyboards.
Connected Services
Of the four connected services (Music: Spotify, Ride Share: Uber, Food: DoorDash, and Images: Midjourney), only one – music – do I immediately want to use. I have a Spotify account, which I can connect via the Rabbithole portal, but it never works. I connect, I test, it does not work, I delete and retry… I keep seeing the “I could not start up the Spotify rabbit” error message.
Journaling
The journal feature (saved voice notes, images, etc.) looks like it might have some value, but only if I can easily get on WiFi. Otherwise, just using my phone is the way to go.
Rabbit R1
The Wizard of AI
Carve out 20 minutes of your day and watch the excellent 99% AI-generated video essay ‘The Wizard of AI,’ created by Alan Warburton and commissioned by Data as Culture at the Open Data Institute.
AI Tools Used:
- Runway Gen 2 to generate 16:9 ‘AI Collaborator’ video clips.
- Midjourney, Stable Diffusion and DALL-E 3 to generate still images.
- Pika to generate 3 second fish loops.
- TikTok for detective speech synthesis.
- HeyGen to generate AI talking detective head.
- Adobe Photoshop AI to expand images.
- Topaz Gigapixel AI to upscale images.
- Adobe After Effects to put everything together.
Trust me, you will thank me after watching this.
Bard Versus The New Bing
Invites to test Bard and the New Bing arrived within 24 hours of each other. The Bard invite arrived first, and I must admit to being underwhelmed. Bard was boring. I had heard the rumors that Google’s secret AI was leaps and bounds ahead of OpenAI’s ChatGPT, convincing at least one engineer of sentience. However, the experience was largely dull.
As an alternative to regular search, Bard does not immediately offer up a convincing reason to stick with its service. The results take a little longer to generate and do not contain URLs. When searching for places to eat in Chicago, I had to independently Google Bard’s text results. Bard suggested two excellent options that met my criteria, but then suggested options that made little sense. I can see one potential future here, and that is in Augmented Reality, where Bard is a competitor to Alexa – vocalizing responses to my spoken requests. But this is only going to have value if Bard can demonstrate accuracy and link to actual resources on the internet.
New Bing is something else. It took a few clicks to access the new Bing (it started up in Safari, did not like being in Microsoft Edge Dev, but worked like a dream in regular Edge), and it felt like I was in Las Vegas, which is both good and bad.
I was impressed that the new Bing (NuBing?) suggested a choice of conversational style: Creative, Balanced, or Precise. Somewhat ironically, I found myself Googling how to try the new features.
AI image generation (Image Creator) is baked into chat and initially works surprisingly fast and well. I was unable to get a widescreen image even though Bing told me it could change the aspect ratio of the results, and my request for a “dinosaur riding a kitten” was churned out as a kitten riding a dinosaur. But it did it fast. On a day when ChatGPT was up and down (and lacking historical chats) this was particularly impressive. Subtly, Bing was counting up to a limit of 15 with each image request. With only a few credits left, I asked for an image of a kitten dressed as Judge Dredd. Bing Binged itself with a search of Wikipedia and spat out some acceptable results.
I have no idea if these search results are being piped into the image prompt, but I like to think they are.
So, I will definitely be using the New Bing. Bard, not so much.
For kicks, here are some of the images that Bing was able to create.
Messing About With AI: Part 2
Going with some Dylan Thomas today. Thought the opening lines of “Do not go gentle into that good night” might be worth a go:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
DALL-E generated 4 options again:
DiffusionBee threw up what looked like a Norse word cloud:
Unimpressed with this, I added a “by Banksy” style modifier to see if this created something more visually arresting. I guess it did. Messing about with styles (drawing, visual, pen, carving and etching, camera, color, emotions, style of an artist or community, CGI software, and CGI rendering) is where I may have to add more direction.
So, I added a bunch of modifying styles. I then learned that DiffusionBee limits the number of text characters for the prompt. After removing a few, I ended up with this (Angry, Melancholic, Oil Paint, Dramatic, Surrealist):
Again, Craiyon gets appropriately angsty. Will have to try something more placid tomorrow:
Messing About With AI: Part 1
I signed up for and/or downloaded several AI image-generating services recently. For kicks, I have started to post poetry and descriptions from classic novels to see what the results are. I started the process using one of the most celebrated poems ever: Catullus 85:
Ōdī et amō. Quārē id faciam fortasse requīris.
Nesciŏ, sed fierī sentiō et excrucior.
There are many English translations and interpretations, so I went with Wikipedia:
I hate and I love. Why I do this, perhaps you ask.
I know not, but I feel it happening and I am tortured.
So, I posted this into DALL-E. The word “torture” was flagged as not appropriate, so I went with Google’s stock translation (which was accepted):
I hate and I love. Wherefore I do this, perhaps you ask.
I do not know, but I feel it being done and I am tormented.
DALL-E generated 4 options:
Options one and two are cheerfully banal, but three and four have a slight spark. Option three is my winner. And DiffusionBee seems to follow the same tack, generating this one image from the original text (no issues, it seems, with the word torture):
Craiyon’s output definitely felt more teenage angsty. Their AI obviously has the machine soul of a poet:
Will try again tomorrow with something completely different.
DT&L Conference Registration Opens April 14
The Distance Teaching & Learning Conference (@UWMadison #UWdtl) is 100% online, and runs 2nd – 5th August, 2021.
Registration is just $329.00 for 75+ sessions from internationally-renowned Online and DistanceEd experts.
More information can be found at https://dtlconference.wisc.edu
Distance Teaching & Learning Conference 2020
For the first time ever, the Distance Teaching and Learning conference went fully online. This is my online diary, and placeholder for things I need to return to in the future.
I must admit that I missed being in Madison this time of the year but found the online conference to be considerably more efficient. This efficiency did have a downside – I admit to being in a state of continuous partial attention as I fielded work calls and requests simultaneously.
Surprisingly, Slack became a vibrant and well-used part of the conference. Participation on Twitter significantly declined, with far fewer #UWdtl posts, live tweeting, and side conversations this year. Slack was the place to be. Messy, overloaded with information, and chaotic. But also humanizing, filling a gap for those starved of physical interaction.
Interaction in the sessions via services like Poll Everywhere, Google Docs, and Google Slides was variable, but paid huge dividends when it worked. My advice to presenters in the future is:
- Use an easy-to-type shortened URL (e.g., Bitly) and have this on all slides during the interactive parts of the presentation.
- Make sure to activate your tool of choice before the presentation starts.
- Consider placing a link in the Guidebook App.
I got to moderate some of the sessions on Tuesday and Wednesday, which gave me a glance behind the curtain. The majority of presentations used Zoom as the backend, with moderators and presenters in a Zoom breakout room. Video footage (speaker video and shared screen) was passed to Mediasite for participants to watch. Participants could type questions via Mediasite’s Q&A speech bubble, to be relayed to the humble moderator and then read out to the presenter. The tech team behind all this were exemplary – fielding issues and questions with quiet grace and authority. The more interactive sessions used Blackboard Collaborate, and here all could talk and chat simultaneously.
The majority of sessions were recorded, and these recordings were made available a few weeks after the conference finished. Making these recordings available is something I particularly appreciate, but it does not look as if many have taken advantage of it – the view counts for many sessions are in single figures at present (one session that I missed, but want to watch, is “Measuring Engaged Learning in Online and Blended Courses”).
There were a few themes that seemed to bubble up during the conference:
- Understanding how to show caring for online students
- HyFlex
- Accessibility
- Online engagement
Tuesday
My colleague Margaret Workman presented a great eposter (Can we meet all of the learning outcomes in an online laboratory class) in the morning. The eposters were the perfect format – three 15-minute sessions were repeated over 45 minutes. This meant that you could jump from eposter to eposter like a series of speed sessions. In the virtual environment, this worked very well indeed. I followed Margaret’s session with Steve VandenAvond’s eposter (Creating Your Own Reality: The development of In-House Interactive VR).
Newton Miller gave a barnstorming keynote that really kicked things up and set a tone that continued throughout the conference. Historically, the conference has been very white. Black and brown faces are underrepresented at the conference, and this is not a good thing, particularly this year. Newton’s keynote and Q&A posed a series of considerations that are both timely and important.
Thomas Royce Wilson was well-prepared for his “Cranky Colleagues v. Killer Robots: Helping Others Embrace Technology” which provided a useful framework for effectively collaborating with colleagues who might be technology-averse.
Each day ended with a “live wrap up.” This helped to reinforce the sense of community and a cohesive set of programming. The wrap up was also used to share pictures from the daily hashtag competitions.
Wednesday
HyFlex was a significant theme at the conference. Brandon Taylor, Janyce Agruss, and Amy Haeger shared their experience of teaching in the HyFlex modality (360-Degree View: Shared Experiences of a HyFlex Course Design Pilot) – a modality that now seems to be featuring heavily in the plans of most colleges and universities.
Mary Ellen Dello Stritto presented on “Using Course-level Data for Research” and shared Oregon State University’s “Online Learning Efficacy Research Database.” The database is a “searchable resource of academic studies on the learning outcomes of online and/or hybrid education in comparison to face-to-face environments.” I will definitely be taking a look at this later.
Maria Widmer and Claire Barrett presented on “Strategies for Connection and Belonging in Online First-Year Seminars,” in which I was reminded of the usefulness of “jigsaw discussions.”
Jean Mandernach’s presentation on “Teach More Students Without Increasing Your Instructional Time” was particularly interesting, and something I plan to dig deeper into. She also recommended a book that looks like it could add some value (Attention Management: How to Create Success and Gain Productivity – Every Day).
Thursday
Constance Wanstreet presented on “Learning Analytics and Gateway Courses: Keys to Student Success.” I think there is a gap here that the conference could fill by offering a beginner’s guide to learning analytics, with separate audiences for educators and administrators.
Trey Martindale’s “Online Learning and the Next Few Years in Higher Education: Follow the Money” was the highlight of the day. Not the happiest of analyses, but argued well and definitely of value.
Tanya Joosten presented on “Empirical Approach to Identifying Digital Learning Innovation Trends.” Those trends are helpfully contained here, with more information on the DETA site.
Oliver Dreon ran an engaging discussion (in Blackboard Collaborate) on “Researching online students’ perceptions.” I don’t know if this is a trend, but some institutions are moving away from using the QM rubric (which has a cost) to the free OSCQR (SUNY Online Course Quality Review Rubric). One thing I plan to investigate later came up in this discussion:
The instrument we adapted for surveying our online instructors is Bolliger, D. U., Inan, F. A., & Wasilik, O. (2014). Development and Validation of the Online Instructor Satisfaction Measure (OISM). Educational Technology & Society, 17(2), 183–195.
Overview
The conference was surprisingly emotional – the feedback that I saw shared highlighted the sense of connectedness this year. Many attendees found the virtual format to be more efficient and productive. I don’t know how much of this structure will be used in future conferences, but I see the future as being more blended.
DTEN D7 75
Top of my wishlist for small/medium collaboration rooms is the DTEN D7 75. Basically a self-contained Zoom Rooms alternative that removes the need for a PC, credenza, and miscellaneous items in the room. Little bit spendy, though.