I attended the Adobe Chicago Generation AI event this week. Here are some brief notes:
Digital Literacy & Access: University-Wide & Classroom Perspectives
I was particularly impressed by the presentations from Professors Justin Hodgson (Indiana University) and Sheneese Thompson (Howard University), who shared valuable insights on digital literacy initiatives. They discussed both university-wide strategies and classroom-specific approaches for expanding access to Adobe tools. Their practical examples of how these tools can be integrated into diverse educational environments were especially helpful.
Justin spoke about the Digital Gardener Initiative and shared this resource.
Future Forward: Reimagining Career Readiness for the Next Generation
Joshua Meredith and Clark Edwards (both from Deloitte) presented on “Future Forward: Reimagining Career Readiness for the Next Generation.” Their analysis of emerging workforce trends argued that educational institutions must adapt to prepare students for success in an AI-transformed job market.
Slides were not distributed, but the Deloitte report (Preparing students for an AI-driven workforce and the future of work) was.
For Good: Navigating Wicked Problems with Adobe Express and Acrobat AI Assistant
Shauna Chung led a demonstration of Adobe Express and Acrobat AI Assistant, with a hands-on workshop (For Good: Navigating Wicked Problems). Two tools were highlighted as part of the session:
AI Assistant in Adobe Acrobat: This tool operates as a NotebookLM competitor (without the podcasts). The value proposition Adobe offers here is that your documents retain their privacy and are not used for training. According to Adobe, OpenAI’s GPT models power the backend. In quick tests, it worked well for me. However, the prompt window had a character limit of about 500 characters, and I imagine that the context window for AI Assistant is not as large as NotebookLM’s.
Adobe Express: I see a tacit admission from Adobe here that the Creative Cloud tools are intimidating to new users, with a UX designed for established ones. Adobe Express is positioned as the platform for new and occasional users, and its generative AI tools are positioned as more ethical than the competition.
The organizers provided six months of free access to the tools.
Overall, I found the sessions to be very helpful. I hope future events offer a dedicated workshop-only option for faculty and staff getting up to speed with the tools.
First impressions of the Rabbit R1 are not great. The industrial design of the device makes me want to love it, but simple tasks fail in ways that quickly frustrate.
In no particular order, here are the stumbles so far.
Website
The Rabbit website looks pleasant, but the company chose a smaller-than-average font size that is painful to read. I wish they had favored accessibility over looking cool.
WiFi
WiFi currently sucks. The only way I can get back onto a previously joined network on the R1 is to forget it and retype the password. To add insult to injury, tapping away on the virtual keyboard is torturous (small virtual keys, close together). The device supports Bluetooth, but only for speakers and headsets – not Bluetooth keyboards.
Connected Services
Of the four connected services (Music: Spotify, Ride Share: Uber, Food: DoorDash, and Images: Midjourney), only one – music – is something I immediately want to use. I have a Spotify account, which I can connect via the Rabbithole portal, but it never works. I connect, I test, it does not work, I delete and retry… I keep seeing the “I could not start up the Spotify rabbit” error message.
Journaling
The journal feature (saved voice notes, images, etc.) looks like it might have some value, but only if I can easily get on WiFi. Otherwise, just using my phone is the way to go.
Carve out 20 minutes of your day and watch the excellent 99% AI-generated video essay ‘The Wizard of AI,’ created by Alan Warburton and commissioned by Data as Culture at the Open Data Institute.
AI Tools Used:
Runway Gen 2 to generate 16:9 ‘AI Collaborator’ video clips.
Invites to test Bard and the New Bing arrived within 24 hours of each other. The Bard invite arrived first, and I must admit to being underwhelmed. Bard was boring. I had heard the rumors that Google’s secret AI was leaps and bounds ahead of OpenAI’s ChatGPT, convincing at least one engineer of its sentience. However, the experience was largely dull.
What is the purpose of Bard?
As an alternative to regular search, Bard does not immediately offer up a convincing reason to stick with its services. The results take a little longer to generate and do not contain URLs. When searching for places to eat in Chicago, I had to independently Google Bard’s text results. Bard suggested two excellent options that met my criteria, but then suggested options that made little sense. I can see one potential future here: augmented reality, where Bard is a competitor to Alexa – vocalizing responses to my spoken requests. But this is only going to have value if Bard can demonstrate accuracy and link to actual resources on the internet.
Welcome To The New Bing
New Bing is something else. It took a few clicks to access the new Bing (it started up in Safari, did not like being in Microsoft Edge Dev, but worked like a dream in regular Edge), and it felt like I was in Las Vegas, which is both good and bad.
Conversational Style
I was impressed that the new Bing (NuBing?) offered a choice of conversational style: Creative, Balanced, or Precise. Somewhat ironically, I found myself Googling how to try the new features.
Kitten and Dinosaur
AI image generation (Image Creator) is baked into chat and initially works surprisingly fast and well. I was unable to get a widescreen image even though Bing told me it could change the aspect ratio of the results, and my request for a “dinosaur riding a kitten” was churned out as a kitten riding a dinosaur. But it did it fast. On a day when ChatGPT was up and down (and lacking historical chats), this was particularly impressive. Subtly, Bing was counting each image request against a limit of 15. With only a few credits left, I asked for an image of a kitten dressed as Judge Dredd. Bing Binged itself with a search of Wikipedia and spat out some acceptable results.
Judge Kitten
I have no idea if these search results are being piped into the image prompt, but I like to think they are.
So, I will definitely be using the New Bing. Bard, not so much.
For kicks, here are some of the images that Bing was able to create.
A steampunk armadillo
Kitten and Dinosaur 1
Kitten and Dinosaur 2
Kitten and Dinosaur 3
An image of James Moore (who works at DePaul University) riding on the back of a kitten
DiffusionBee threw up what looked like a Norse word cloud:
Do not go gentle into that good night
Unimpressed with this, I added a “by Banksy” style modifier to see if it created something more visually arresting. I guess it did. Messing about with styles (drawing, visual, pen, carving and etching, camera, color, emotions, style of an artist or community, CGI software, and CGI rendering) is where I may have to add more direction.
Banksy Style
So, I added a bunch of modifying styles. I then learned that DiffusionBee limits the number of characters in the prompt. After removing a few modifiers, I ended up with this (Angry, Melancholic, Oil Paint, Dramatic, Surrealist):
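(An aside: DiffusionBee is a GUI app, so the trimming above was done by hand. Purely as an illustration of that process, here is a sketch that appends style modifiers to a base prompt until a character budget is hit. The budget below is my guess, not a documented limit.)

```python
# Illustration only: mimic the manual trimming of style modifiers to fit
# DiffusionBee's prompt length limit. DiffusionBee is a GUI app, so this
# script does not drive it; MAX_CHARS is an assumption, not the real limit.
MAX_CHARS = 250

base = "Do not go gentle into that good night"
modifiers = ["Angry", "Melancholic", "Oil Paint", "Dramatic",
             "Surrealist", "by Banksy", "Carving and Etching"]

prompt = base
for modifier in modifiers:
    candidate = f"{prompt}, {modifier}"
    if len(candidate) > MAX_CHARS:
        break  # drop the remaining modifiers, as I did by hand
    prompt = candidate

print(prompt)
```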
I signed up for and/or downloaded several AI image-generating services recently. For kicks, I have started to post poetry and descriptions from classic novels to see what results they produce. I started the process with one of the most celebrated poems ever, Catullus 85:
Ōdī et amō. Quārē id faciam fortasse requīris.
Nesciŏ, sed fierī sentiō et excrucior.
There are many English translations and interpretations, so I went with Wikipedia:
I hate and I love. Why I do this, perhaps you ask.
I know not, but I feel it happening and I am tortured
It looks like this request may not follow our content policy.
So, I posted this into DALL-E. The word “torture” was flagged as not appropriate, so I went with Google’s stock translation (which was accepted):
I hate and I love. Wherefore I do this, perhaps you ask.
I do not know, but I feel it being done and I am tormented.
DALL-E generated 4 options:
Catullus 85 – 1
Catullus 85 – 2
Catullus 85 – 3
Catullus 85 – 4
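(Another aside before judging the results: I generated these through the DALL-E web interface, but the same four-option request can be scripted. Here is a minimal sketch using OpenAI’s Python SDK – the model name, image size, and file names are my own illustrative choices, not what the web UI uses behind the scenes.)

```python
# Minimal sketch: request four DALL-E options for the Catullus 85 translation.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; I used the web UI, so this is illustrative only.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I hate and I love. Wherefore I do this, perhaps you ask. "
    "I do not know, but I feel it being done and I am tormented."
)

# DALL-E 2 supports n > 1 per request; DALL-E 3 only returns one image at a time.
response = client.images.generate(
    model="dall-e-2",
    prompt=prompt,
    n=4,
    size="1024x1024",
    response_format="b64_json",
)

for i, image in enumerate(response.data, start=1):
    with open(f"catullus-85-{i}.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
```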
Options one and two are cheerfully banal, but three and four have a slight spark. Option three is my winner. And DiffusionBee seems to follow the same tack, generating this one image from the original text (no issues, it seems, with the word torture):
Catullus 85 – DiffusionBee
Craiyon’s output definitely felt more teenage-angsty. Their AI obviously has the machine soul of a poet:
Catullus 85 – Craiyon
Will try again tomorrow with something completely different.
For the first time ever, the Distance Teaching and Learning conference went fully online. This is my online diary, and a placeholder for things I need to return to in the future.
I must admit that I missed being in Madison this time of the year but found the online conference to be considerably more efficient. This efficiency did have a downside – I spent much of the conference in a state of continuous partial attention as I fielded work calls and requests simultaneously.
Surprisingly, Slack became a vibrant and well-used part of the conference. Participation on Twitter declined significantly, with far fewer #UWdtl posts, live tweets, and side conversations this year. Slack was the place to be. Messy, chaotic, and awash in information overload. But also humanizing, filling a gap for those starved of physical interaction.
Interaction in the sessions via services like Poll Everywhere, Google Docs, and Google Slides was variable, but paid huge dividends when it worked. My advice to presenters in the future is:
Use an easy-to-type shortened URL (bitly) and have it on all slides during the interactive parts of the presentation.
Make sure to activate your tool of choice before the presentation starts.
I got to moderate some of the sessions on Tuesday and Wednesday, which gave me a glimpse behind the curtain. The majority of presentations used Zoom as the backend, with moderators and presenters in a Zoom breakout room. Video footage (speaker video and shared screen) was passed to Mediasite for participants to watch. Participants could type questions via Mediasite’s Q&A speech bubble, to be relayed to the humble moderator and then read out to the presenter. The tech team behind all this were exemplary – fielding issues and questions with quiet grace and authority. The more interactive sessions used Blackboard Collaborate, where everyone could talk and chat simultaneously.
The majority of sessions were recorded, and these recordings were made available a few weeks after the conference finished. Making these recordings available is something I particularly appreciate, but it does not look as if many have taken advantage – the views for many sessions are in single figures at present (one session that I missed but want to watch is “Measuring Engaged Learning in Online and Blended Courses”).
There were a few themes that seemed to bubble up during the conference:
Understanding how to show caring for online students
Tuesday
My colleague Margaret Workman presented a great eposter (Can we meet all of the learning outcomes in an online laboratory class?) in the morning. The eposters were the perfect format – three 15-minute sessions were repeated over 45 minutes, which meant that you could jump from eposter to eposter like a series of speed sessions. In the virtual environment, this worked very well indeed. I followed Margaret’s session with Steve VandenAvond’s eposter (Creating Your Own Reality: The development of In-House Interactive VR).
Newton Miller gave a barnstorming keynote that really kicked things up and set a tone that continued throughout the conference. Historically, the conference has been very white. Black and brown faces are underrepresented at the conference, and this is not a good thing, particularly this year. Newton’s keynote and Q&A posed a series of considerations that are both timely and important.
Thomas Royce Wilson was well prepared for his “Cranky Colleagues v. Killer Robots: Helping Others Embrace Technology,” which provided a useful framework for collaborating effectively with colleagues who might be technology-averse.
Each day ended with a “live wrap-up.” This helped to reinforce the sense of community and a cohesive program. The wrap-up was also used to share pictures from the daily hashtag competitions.
Wednesday
HyFlex was a significant theme at the conference. Brandon Taylor, Janyce Agruss, and Amy Haeger shared their experience of teaching in the HyFlex modality (360-Degree View: Shared Experiences of a HyFlex Course Design Pilot) – a modality that now seems to feature heavily in the plans of most colleges and universities.
Mary Ellen Dello Stritto presented on “Using Course-level Data for Research” and shared Oregon State University’s “Online Learning Efficacy Research Database.” The database is a “searchable resource of academic studies on the learning outcomes of online and/or hybrid education in comparison to face-to-face environments.” I will definitely be taking a look at this later.
Maria Widmer and Claire Barrett presented on “Strategies for Connection and Belonging in Online First-Year Seminars,” in which I was reminded of the usefulness of “jigsaw discussions.”
Jean Mandernach’s presentation on “Teach More Students Without Increasing Your Instructional Time” was particularly interesting, and something I plan to dig deeper into. She also recommended a book that looks like it could add some value (Attention Management: How to Create Success and Gain Productivity – Every Day).
Thursday
Constance Wanstreet presented on “Learning Analytics and Gateway Courses: Keys to Student Success.” I think there is a gap here that the conference could fill by offering a beginner’s guide to learning analytics, with separate audiences for educators and administrators.
Trey Martindale’s “Online Learning and the Next Few Years in Higher Education: Follow the Money” was the highlight of the day. Not the happiest of analyses, but well argued and definitely of value.
Tanya Joosten presented on “Empirical Approach to Identifying Digital Learning Innovation Trends.” Those trends are helpfully contained here, with more information on the DETA site.
Oliver Dreon ran an engaging discussion (in Blackboard Collaborate) on “Researching online students’ perceptions.” I don’t know if this is a trend, but some institutions are moving away from using the QM rubric (which has a cost) to the free OSCQR (SUNY Online Course Quality Review Rubric) – one thing from this discussion that I plan to investigate later.
The conference was surprisingly emotional – the feedback I saw highlighted the sense of connectedness this year. Many attendees found the virtual format to be more efficient and productive. I don’t know how much of this structure will be used in future conferences, but I see the future as being more blended.