The papers differ in that the paper written by Réhman et al. has a clear focus on a practical application for which the proposed design could be used, namely providing real-time information about a soccer game. The paper written by Laitinen et al. is more vague, and argues that getting spatial information about virtual worlds through a sound display has various applications, but the technology is not aimed at any specific example.
Evaluation of Media Technologies
Both papers ground their evaluation methods in standards. The paper by Réhman et al. uses ISO recommendations to develop an evaluation scheme that measures three aspects of usability: effectiveness, efficiency and satisfaction. The other paper evaluates the design with a formal listening test, conducted in a listening room following recommendations for listening tests where small changes in sound are produced (http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1116-1-199710-I!!PDF-E.pdf).
It's interesting to see that both papers chose to base their evaluations to some extent on proposed standards, and this probably lends the results some credibility, since they are more likely to be comparable to similar research.
Both papers used some form of user evaluation as their method. This can of course be tricky when testing novel designs, since users have no previous experience with them. This might be why both papers include some form of semi-informal user training in their evaluation, giving the users the opportunity to familiarize themselves with the technology to be evaluated.
Communicating Design Research
Expressing in a paper what a design consists of can be challenging, and to me this is especially noticeable in the paper about spatial sound, since sound design is really hard to grasp without actually experiencing it. The paper relies heavily on the knowledge of the reader, so that, for example, the reader can understand how the design works through a mathematical explanation of how sound intensity varies with spatial parameters. For me, not being an expert in acoustics, it proved challenging to form a mental picture of the design as such. The other paper had an easier task in describing its design, since the vibration output of a cellphone is something that I'm familiar with, and also something with fewer varying parameters. I understand how the design is supposed to work, and I can picture to myself how the different vibrations would feel.
Conclusions
The two papers were similar in their approach to design research, and followed the scheme of prototype development followed by user testing. Compared to the article discussed in last week's theme, they presented a research methodology closer to what I'm used to. The conclusions and discussions of the papers were, due to this approach, more focused on the success of the prototype and the values measured in the user tests. For this kind of research it seems to me that close attention has to be paid to the arguments for why the prototype should be developed, since a successful user test may be conducted even if the prototype isn't useful in any real-life scenario.
Gustav, in your opinion, what part of design research is the most difficult? Coming up with the idea, or evaluating the work that has been done?
Coming up with a "good" idea is probably the most difficult part. The evaluation is most likely not so hard to do if you have a well-defined purpose for what your design is to accomplish, which in some ways is a vital part of a good idea.