Association for Information Systems
AIS Electronic Library (AISeL)
ICIS 2009 Proceedings, International Conference on Information Systems (ICIS), 1-1-2009

This material is brought to you by the International Conference on Information Systems (ICIS) at AIS Electronic Library (AISeL). It has been accepted for inclusion in ICIS 2009 Proceedings by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org.

Recommended Citation: Yang, Yang; Chen, Pei-Yu; and Pavlou, Paul, "Open Innovation: An Empirical Study of Online Contests" (2009). ICIS 2009 Proceedings. Paper 13. http://aisel.aisnet.org/icis2009/13

OPEN INNOVATION: AN EMPIRICAL STUDY OF ONLINE CONTESTS
Completed Research Paper
Thirtieth International Conference on Information Systems, Phoenix 2009

Yang Yang (yangyang@temple.edu), Pei-yu Chen (pychen@temple.edu), Paul Pavlou (paul.pavlou@temple.edu)
Fox School of Business and Management, Temple University, Philadelphia, PA, U.S.A.

Abstract

Online contests for open innovation – seekers posting innovation projects to which solvers submit solutions – have developed into a new online commerce model. This study is one of the first to lift the veil on online contests. We find that real-world online contests are very different from those assumed by previous studies: a real-world online contest has an uncertain number of solvers because of its dynamic participation process. Feedback can encourage solvers to contribute more than the equilibrium effort. For a given award, if the seeker's feedback effort is high enough, the emerging number of solvers is a proxy measure of contest performance.
By examining large-scale data from an online contest marketplace, we find that a contest with a higher award, longer duration, shorter description, lower time cost, and higher popularity attracts more solvers. In particular, simple, ideation-based projects are the most efficient at capturing solvers.

Keywords: online contest, open innovation, feedback, contest, online marketplace

Introduction

Returns on investment in R&D and innovation are among the most important sources of future market value for firms today (Hall et al. 2005). Accordingly, a firm's investment strategy for R&D and innovation is critical. The most common approach is internal R&D projects, in which teams of developers within the firm seek solutions to innovation projects on a schedule. However, since the success of internal R&D projects cannot be guaranteed, firms are exposed to the risk of R&D failures. Moreover, because team size is limited, efficiency and outcomes are hard to improve substantially. In recent years, another approach, called open innovation, has emerged (Chesbrough 2003; von Hippel 2005; Terwiesch and Ulrich 2008). This approach relies on the general public outside the firm for solutions. An appealing feature of this open approach is that innovation seekers pay only for the success, not the failure, of innovation projects, so returns on investment can be much higher. In addition, a potentially larger pool of innovators (solvers) may produce faster and better innovation outcomes at lower cost than internal projects. Since the winning solution is typically the best one to survive a highly intensive competition, the outcome is naturally competitive in the market. In recent years, many large firms have adopted open innovation to better leverage their R&D expenditures.
For instance, in September 2007, Procter & Gamble (P&G) launched an open innovation contest, and in the end "at least one of the final four who made it to the Procter and Gamble presentation has discovered a breakthrough in the fabric care marketplace that has got P&G very excited. If it comes to market it will be a win, win scenario for P&G, the design firm and millions of consumers" (Horn 2008). In September 2008, Google launched a $10M open innovation contest, Project 10^100, to solicit new ideas; within a two-month period, Google received over 154,000 submissions from all over the world. Going forward, we expect more firms to adopt open innovation to mitigate the risks of internal projects and to identify solutions from across the globe.

By taking advantage of the Internet, open innovation seekers can reach a large pool of potential solvers at low cost and possibly obtain better solutions. InnoCentive, founded in 2001, was the first online marketplace in the world to host open innovation projects in the form of contests (Allio 2004). It was originally built to facilitate the search for innovative medical solutions. Today a variety of projects are posted there, ranging from website logo design and algorithm design to complex projects such as construction design. The seeker can be an individual, a firm, or any other party. Numerous marketplaces, such as TopCoder and TaskCN, now use online contests for open innovation projects.

A contest is a type of game in which several agents spend resources in order to win one or more prizes (Moldovanu and Sela 2001). The first contest model is due to Lazear and Rosen (1981), who propose a simple model with only two competitors to study how to set the optimal prize structure to stimulate the best output.
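To make concrete how equilibrium effort behaves in such models, consider the standard symmetric Tullock contest – a textbook illustration, not the specific model of any study cited here. Suppose n identical risk-neutral solvers choose efforts e_i at linear cost, compete for a single prize V, and win with probability proportional to effort:

```latex
p_i = \frac{e_i}{\sum_{j=1}^{n} e_j}, \qquad
\max_{e_i}\ \bigl( p_i V - e_i \bigr)
\quad\Longrightarrow\quad
e^{*} = \frac{(n-1)\,V}{n^{2}} .
```

Individual equilibrium effort e* decreases in n for n >= 2, even though total effort n e* = (n-1)V/n rises toward V: each additional solver dilutes every solver's chance of winning and thus weakens individual incentives.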
In most contest studies, information is complete, the contest has a single stage, and contest performance is evaluated along one dimension such as quality or quantity (Lazear and Rosen 1981; Moldovanu and Sela 2001; Terwiesch and Loch 2004; among others). One important finding is that having many solvers work on an innovation contest lowers each solver's equilibrium effort, which is undesirable for seekers. Recently, Loch et al. (2006) discuss different problem types in product development and suggest that performance evaluation should be modeled with multiple dimensions rather than one. Terwiesch and Xu (2008) may be the first to extend contest research to the open innovation field. Their distinctive contribution is dividing projects into three types: ideation-based, expertise-based, and trial-and-error projects (1). They find that seekers benefit from having more solvers because the solutions are more diversified, which can mitigate and sometimes outweigh the effect of each solver's underinvestment.

Until now, most studies of contests have been theoretical and have focused mainly on the optimal design of the award structure. Compared with research on other Internet-based transactional activities such as online shopping, online auctions, and reverse auctions, field understanding of online contests is very limited. For an online contest, the seeker must make more decisions than award structure design alone: she also needs to consider duration, start date, the detail of the project description, and collaboration strategies before launching a contest. Each of these factors can affect final performance, yet most of them have not been studied. For example, how many solvers will a contest have? How do duration and award affect contest performance? How should a seeker collaborate with solvers?
Our study aims to answer these questions and to provide guidance to innovation seekers on how to set up an online contest to maximize innovation performance.

The uniqueness of this study is that we can examine open innovation contests with large-scale empirical data from an online marketplace. We find that real-world online contests are very different from traditional contests and from those assumed by previous studies. A real-world online contest has an uncertain number of solvers because of its dynamic participation process and publicly observable submissions. In particular, when seekers collaborate with solvers by providing feedback, the probabilistic discounting of the award can be largely reduced, and solvers are willing to exert much more than the equilibrium effort. For a given award, if the seeker's feedback effort is enough to cover all preferred solvers, performance can be measured by the emerging number of solvers. Finally, we derive a prediction model for the emerging number of solvers and show that a contest with a higher award, longer duration, shorter description, lower time cost, and higher popularity will have more emerging solvers. Naming projects, which have low expertise requirements and low time costs, are the most efficient in the marketplace. Both duration and project type moderate the impact of the award on the number of solvers. Marketplace maturity also matters, although our results indicate that the marketplace we studied exhibits a negative network effect, that is, negative growth of the solver population.

The rest of this paper is organized as follows. First, we introduce the process of real-world online contests and identify key differences between online contests and traditional contests. Second, we present our performance model, including the impact of feedback. Next, we develop hypotheses and a model to predict the number of solvers, which serves as a proxy measure of contest performance. Then we test the hypotheses with data.
Finally, we present the discussion and implications, along with limitations and future research.

Real World Online Contests

Previous studies mostly assume traditional or offline contest settings. Although some studies discuss online contests, most of their assumptions are still based on traditional cases (e.g., Terwiesch and Xu 2008). Before proceeding to our study, it is necessary to introduce the process of real-world online contests and the key differences from traditional contests.

In a third-party-hosted online contest, there are usually three parties: an innovation seeker, many solvers, and the marketplace. A typical workflow of a one-stage online contest is shown in Table 1.

Table 1. Work Flow of Online Contests

Posting.
  Seeker: gives the project description; sets the number of winners, the award amount, and the open duration (how long the contest accepts submissions); deposits the full award.
  Solvers: expertise-matched solvers are notified when appropriate new projects are published; solvers can also browse or search for qualified projects.
  Marketplace: a customer service representative (CSR) is assigned to each contest; the CSR helps list the project in an appropriate category or reorganizes the project, confirms full payment of the award, and initiates the contest.

Bidding.
  Seeker: waits for solvers to join the contest; may invite solvers.
  Solvers: review the project and decide whether to join; once they join, they can contact the seeker by email or through the private message system.

Feedback.
  Seeker: provides qualitative feedback on preferred submissions.
  Solvers: submit solutions, receive feedback, and make improvements; some solvers ultimately fail to submit anything.
  Marketplace: eliminates undesirable submissions, such as completely wrong or empty ones.

Awarding.
  Seeker: chooses the winners; the project ends once the winners are chosen.
  Solvers: winners receive the award; the other participating solvers can report to the CSR if an awarding decision is suspicious (e.g., the winner is an alias of the seeker).
  Marketplace: reviews the selected winners, checks suspicious-activity reports, sends 80% of the award to the selected winner(s), and transfers the IPR to the seeker; the remaining 20% of the award is deducted as profit.

Extending.
  Seeker: if not satisfied with any submission, she (2) has the option of extending the project for more days by adding to the award; the contest then returns to the Bidding step.
  Solvers: go back to the Bidding step.
  Marketplace: evaluates the extension request and extends the project.

Evaluating.
  Seeker: has the option to rate the performance as negative, neutral, or satisfied.
  Solvers: winners can leave feedback for the seeker.
  Marketplace: if no feedback is given and a winner has been selected, the system records "satisfied" feedback for both parties.

(1) According to Terwiesch and Xu's definitions, ideation-based projects look for innovative ideas; the problem can be as simple as naming a new company or designing a logo for a website. An expertise-based project usually requires specific expertise that is not common; software development is a typical example. Trial-and-error projects are innovative problems with a very rugged solution landscape; solvers cannot know the result without trials.

(2) In this paper, for convenience, we refer to a seeker as "she" and a solver as "he".