Ability to highlight silent moments in video/images
on our radar
Marco Torrente
Highlight and tag silent parts of videos or images to add important context that is shown visually but not spoken.
Jazmin Taheri
Merged in a post:
Ability to analyse videos from observational studies
Mads Bille Eriksen
I am mixing findings from interviews and observations in my studies. Recently, a user guided me through a procedure on their factory floor. I want the ability to analyse the footage from such an observation by adding time-marked findings that may not be apparent from the transcript. Further, I want to analyse those findings alongside my interview findings.
Pat Barlow
Hiya Mads Bille Eriksen, thanks for this post! I have a few more questions for you:
- Could you provide more details on how you envision the time-marked findings feature to work?
- What specific information are you hoping to extract from the video footage that isn't apparent from the transcript?
- How would you like the findings from the video analysis to be integrated with the interview findings?
Mads Bille Eriksen
Pat Barlow I just recently started using Dovetail - keep that in mind.
1. I envision that I play through the video, press pause, and note a finding that is then linked to that point in time in the video.
2. The findings could be the following:
   2.1 Observations about the user of a physical product, e.g. "the user dismantles the button when she does not hear a click from the button".
   2.2 Observations about the context of use, e.g. "the operating room is so noisy that it is hard to speak".
   2.3 Observations like "the user wears three layers of gloves when pressing on the tablet screen: chainmail gloves, insulating cotton gloves and nitrile plastic gloves".
3. I would like to cluster findings from the video analysis with interview findings.
Let me know if you'd like more info
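To make the request concrete, here is a minimal sketch of what such a time-marked finding could capture, assuming hypothetical field names (this is illustrative only, not Dovetail's actual data model):

```typescript
// Purely illustrative: these names are assumptions, not Dovetail's data model.
interface TimeMarkedFinding {
  videoId: string;      // the observation recording the finding belongs to
  timestampMs: number;  // position in the video where the researcher paused
  note: string;         // the finding itself, e.g. "user dismantles the button"
  tags: string[];       // tags used to cluster with interview findings later
  author: string;       // who recorded the finding
}
```

In this sketch, the tags field is what would let video findings be clustered together with interview findings during analysis.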
Jazmin Taheri
Merged in a post:
Adding observations to the transcript
Justina Keldusyte
It would be great to have a good way to add observations to the transcript. My main use case is usability studies where users share their screen and complete tasks. A way to capture interviewer/note-taker observations of the user's actions would make analysis easier, for example when working in the highlight canvas: right now the only way to know what the user did on screen while vocalising feedback is to go into the individual video, which makes it harder to look across users. I would like to attach observations to the parts of the transcript where they happened in the video, so they could be folded into the highlight in this format: what the user does on screen (observation) + what the user is saying at the time. Ideally it would also note who made the observation, especially if the note-taker and the tagger are not the same person. One of the Dovetail Champions members suggested a work-around, but it would be better if there were an actual solution for this on the platform.
My original question and suggested work-around: https://heydovetail.slack.com/archives/C014TC5JMRR/p1727800672828979
Pat Barlow
Thank you for posting, Justina Keldusyte! I have a few more questions for you:
- Can you provide more details on how you envision the observation feature to work in terms of user interaction?
- What specific information would you like to be included in the observation notes, apart from who made the observation and what the user was doing?
- Could you elaborate on the limitations of the suggested work-around and how you think an in-built feature could overcome these?
Justina Keldusyte
Hey, Pat Barlow! Thanks for getting back to me :) To answer your questions:
- Not dissimilar to tagging, I suppose, but instead of highlighting text already in the transcript, I would be able to enter text at a relevant point in the transcript and it would be automatically tagged with my name (from my log-in information) and an observation tag, or entered directly as a special data type, so that I could look at just observations in my analysis, or just observations by a specific observer if there is more than one. Let me know if this doesn't make sense :D
- The two you've mentioned, the observer's name and what the user was doing, are the most important. Additionally, a tag marking it as an observation would be great, or some other way to clearly identify that it was not part of the transcript and to make it possible to review all observations.
- There are a few:
- The observation, once added using the work-around, is not easy to identify in the text unless I create and apply a tag to it myself - it would be much easier if that were done automatically; what if I forget to do it? Also, the transcript then contains extra text that was not part of the conversation, which is not great from a research methods perspective. And I want to be able to look at observations on their own as well as next to the bits of transcript they relate to. The more I think about this, observations should probably be their own data type.
- Related to my first point: if I add additional text to the transcript, even if it's all tagged, and then apply magic summary, for example, what I added is treated the same as the rest of the transcript and becomes part of the summary. The same goes for the highlight canvas - if I cluster by tag and "observation" is just another tag, it messes up the groupings.
- The work-around is harder to explain and keep consistent across the team - people might use different kinds of brackets, forget to tag, or not add observations at all, because it feels like you're editing the transcript, which is a big ick for any researcher.
- Overall, I think it would make it easier to enrich the data - allowing not only factual observations ("user clicked x") but also other behavioural and emotional dimensions, like "user sounds frustrated" or "they seem pleasantly surprised" - all the things that are super obvious in the video but not part of the transcript, which makes them difficult to take into account when analysing the data.
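A minimal sketch, with hypothetical field names, of what an observation as its own data type might look like (illustrative only, not Dovetail's actual model):

```typescript
// Purely illustrative: a possible shape for an "observation" data type, kept
// separate from the transcript so summaries and tag clusters can ignore it.
interface Observation {
  sessionId: string;           // the interview or usability session
  transcriptAnchorMs: number;  // where in the recording the observation applies
  text: string;                // e.g. "user clicks the wrong menu item twice"
  dimension: "action" | "emotion" | "context"; // factual vs. behavioural/emotional notes
  observer: string;            // taken from the note-taker's login, not the tagger's
}
```

Because the observation lives outside the transcript text, features that operate on the transcript (summaries, tag clustering) could treat it separately, which is the core of the request above.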
Arvind Venkataramani
Oh yes, this is super useful to have for behavioral research, which is not the only thing I do as a UXR, but it is a lot of it.
Jazmin Taheri
Merged in a post:
Tag photos
Jordan Smith
We would like the ability to tag photos the same way we do text. Ideally, you could drop a box or circle onto a specific area of the photo and then leave a tag that applies to that part of the photo, such as product name, pain point, etc.
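A minimal sketch of how such a region-level photo tag could be represented, using hypothetical names (illustrative only):

```typescript
// Purely illustrative: one way a region-level photo tag could be modelled.
type PhotoRegion =
  | { kind: "box"; x: number; y: number; width: number; height: number } // dropped rectangle
  | { kind: "circle"; cx: number; cy: number; radius: number };          // dropped circle

interface PhotoTag {
  photoId: string;
  region: PhotoRegion; // the part of the photo the tag applies to
  tag: string;         // e.g. "product name", "pain point"
}
```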
Steffen Kautz
In a similar vein, I would love a view where the video is the main focus and the transcript runs along the side, so I can focus on non-verbal observations more easily, but still highlight them. I find it's often a combination of both that makes a good highlight
Jonathan Prisant
Yes please! One large part of my research includes screenshots of websites, UX patterns, experience messaging, customer artifacts, etc... I would love to be able to more thoughtfully curate those resources in Insights and various views.
Jazmin Taheri
on our radar
Olivia Harold
I type notes into the transcript with square brackets around them, then mark them with a tag named "notes"