Better review system
under review
Glen Lipka
Sites like Quora and Stack Exchange have robust reputation systems. Reviews on ADPList are currently almost uniformly positive: people do not want to antagonize a mentor, and unlike Glassdoor there is no anonymity option to encourage candid feedback.
The goal is to differentiate mentors/mentees to better help find good matches.
This is a complicated topic, but a robust reputation system would need to find a good balance to work.
My suggestion would be to have several facets of a review for both the mentor and mentee, on which each grades the other on a 1-5 scale.
For mentors, you have facets like friendly, knowledgeable, technique, strategy, process, testing, networking, big company/startup, and other qualities, each graded 1-5.
For mentees, you have creativity, energy, smarts, etc., with the same grading.
Also a text area for details.
Lastly, an option to make the review anonymous.
After every grading, the review would not register for a random number of days between 14 and 30. That way it would be harder to know who said what, protecting anonymity.
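To make this concrete, here is a minimal sketch of what such a review record and its delayed publication could look like. Everything here is hypothetical; the facet names come from the list above, and the type and function names (MentorReview, randomPublishDate, visibleReviews) are made up for illustration.

```typescript
// Sketch of the proposal above: faceted 1-5 grades plus a randomized
// 14-30 day publication delay to protect reviewer anonymity.

type Grade = 1 | 2 | 3 | 4 | 5;

interface MentorReview {
  mentorId: string;
  // Facets from the comment above, each graded 1-5.
  facets: {
    friendly: Grade;
    knowledgeable: Grade;
    technique: Grade;
    strategy: Grade;
    process: Grade;
    testing: Grade;
    networking: Grade;
    bigCompanyOrStartup: Grade;
  };
  details: string;     // free-text area for details
  anonymous: boolean;  // reviewer may hide their identity
  submittedAt: Date;
  publishAt: Date;     // the review stays hidden until this date
}

// Pick a publication date a random 14-30 days after submission, so a
// mentor cannot tie a new review back to a specific recent session.
function randomPublishDate(submittedAt: Date): Date {
  const days = 14 + Math.floor(Math.random() * 17); // 14..30 inclusive
  const publishAt = new Date(submittedAt.getTime());
  publishAt.setDate(publishAt.getDate() + days);
  return publishAt;
}

// Only reviews whose delay has elapsed are visible on a profile.
function visibleReviews(reviews: MentorReview[], now: Date): MentorReview[] {
  return reviews.filter((r) => r.publishAt <= now);
}
```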
Obviously, the design of this can go in many directions. This is just a strawman to illustrate the idea.
Martina Mut
Thanks for bringing this up. I recently had a mentoring session that left much to be desired. Throughout the session, the mentee kept their camera off and seemed disengaged, yawning repeatedly. Despite my profile clearly outlining my areas of expertise, they expected a working session, wanting to showcase their Figma work and make live edits.
Unfortunately, the session lacked direction and was overall unclear in its goals. What's more frustrating is that while I can rate the mentee, I prefer not to leave negative feedback on their profile as it's public.
Additionally, I experienced another no-show recently, and despite reaching out, received no response. As mentors, we invest our time and effort, so it's disheartening when the commitment isn't reciprocated.
While I'm unsure of the solution, it's evident that ADPList needs to address these issues. There's definitely room for improvement.
Melissa Schmitz
Honestly, I think other areas of the platform need to be improved well before anything like this is considered. Most of the time, even if I have sessions available, I don't appear in search results for the topic I mentor in; I only show up if someone searches for my name or specific title. Most of my mentees come from external sources. This leads to mentees booking based on who appears in search, not the type of mentor they're really looking for. I've had the same issue when searching as a mentee myself.
I've had plenty of mentees book me for general appointments that don't address anything related to what I tell them I can help them with in my profile. Oftentimes the problem is that they're vague when I try to ask them questions before the session. Then I try to hear them out and do my best to help them, but it overall becomes a bad (or at least, "meh") session because it was a bad matchup.
Another example: I recently had a group session that didn't work at all. With over a hundred people RSVP'd, it just gave me and everyone else an error; it turned out the platform was down, preventing the session from starting. I got messages from people frustrated about the session. Now imagine if those people could rate me as a mentor... how would that help anything?
What about mentees I've met with who ended the call by trying to market their services to me unsolicited? Would they be allowed to retaliate if I give them a bad review?
So rather than making this a Yelp for human beings, why can't we just focus on (1) a better matching system or (2) reminding people to leave reviews in the current system? I really fail to see how specific ratings would be helpful at this stage of the ADPList product.
As a compromise, perhaps start with a non-public metric that a mentor can review? For example, "Would you book this mentor again?" or "Would you recommend this mentor?" followed up with "Why?" If someone gets too many "No" responses, maybe review the whys and send them a warning/advice email if needed. But as others have pointed out, having to uphold a public number rating on something so subjective is unnecessarily stressful on a platform upheld by goodwill.
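Purely as an illustration of that compromise, a minimal sketch of such a non-public check might look like the following; the field names, thresholds, and shouldFlagForReview are all invented here, not an ADPList feature.

```typescript
// Sketch of a private "would you book this mentor again?" metric.
interface PrivateFeedback {
  mentorId: string;
  wouldBookAgain: boolean;
  why: string; // follow-up free text, read by a human reviewer
}

// If too many responses are "No", flag the mentor so a human can read
// the "why" answers and decide whether a warning/advice email is needed.
function shouldFlagForReview(
  feedback: PrivateFeedback[],
  minResponses = 5,   // don't act on tiny samples
  maxNoRate = 0.4,    // arbitrary example threshold
): boolean {
  if (feedback.length < minResponses) return false;
  const noCount = feedback.filter((f) => !f.wouldBookAgain).length;
  return noCount / feedback.length > maxNoRate;
}
```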
Still, though, I think a public rating system would only make things worse. Even if something is anonymous, depending on how many mentees you have, you may be able to figure out who it is anyway. Just keep a good thing positive.
Laura Timmins
I'd also like to see this feature reviewed. As mentors are giving their time and expertise for free, I feel discouraged that after a positive review my rating can drop purely because someone said the session was 'Good' rather than 'Amazing!'. It would be better to be transparent, when leaving a review, about what factors into the star score: some people will never say something was 'Amazing', but may have been satisfied with the session and found that the mentor hit all of the goals they were asked to address. As someone else has said, from a reputation standpoint I'd be less likely to use the platform if my rating dropped too much from one or two 'less than perfect' sessions (time constraints, lack of info from the mentee to prep effectively, a session just being an 'intro', etc. can all factor into this).
I hugely want to learn and grow as a mentor myself, so anonymous, private pointers on what I could improve, rather than public feedback, would be invaluable.
Rachel
I was drawn to David Montenegro's comment, in particular his point that behavioral economics studies demonstrate that maintaining a "contract" with fewer constraints, when the relationship is grounded in goodwill, returns better results, because it is seen less as an economic transaction (even though no money is involved here).
I have been on both sides, as mentee and as mentor.
On one hand, I can see from a mentee's standpoint that it would help them filter for higher-starred mentors when seeking advice.
This may also provide a higher quality selection of mentors on the platform.
On the other hand, I would suggest weighing the benefits against the risks that star ratings pose for a mentor. I agree that mentors do not require quantified reviews, as comment-based reviews are sufficient to improve one's approach and to give an understanding of a mentor's personal performance and areas for improvement. In fact, star ratings may start to become a pain point.
As David has mentioned, because this is an act of paying it forward, mentors may be less inclined to utilise the platform if there is a need to maintain 'appearances' in the form of star ratings, since these also affect the public's perception of them.
Trust is a large element to consider, as mentors volunteer their personal experiences and public career profile and may now feel the need to withhold critical elements of their advice to the mentees in favour of maintaining high ratings.
As a mentee, I'd like to hear the advice on the reality of the industry and receive critical information that will build my skills and decisions.
As a mentor, it may begin to feel like volunteering offers more risk than reward, as this is not done for financial gain but is simply a give-back deed.
While some pressure to perform is important, pressure may also change the form of advice provided, leading mentors to work in favour of the system and to change the way they engage with the platform. Whether that change would be positive or negative, I do not have the answers, nor do I have your mentor retention rates.
I agree that a feedback mechanism is important, but I would strongly suggest testing this further with a sample of mentors and mentees, and understanding their sentiments and push/pull elements associated with having this system.
Perhaps exploring platforms with similarly unusual power dynamics, or that are volunteer-based, may provide ideas for a more inclusive system: for example, a house-sitting platform, where both parties have equally complex stakes.
Jessica Smith
I think this is a good problem to solve in a community platform. If this is on the roadmap or strongly being considered, I'd urge the designers and POs to please consider the misuse or abuse cases of this. Some people will use this honestly and in a transparent way with good will. Some may not.
So the best approaches to achieving reliable reviews will include considering how reviews, reliability scores, and reputation rankings may be misused.
Serin Paul
+1. This will help us get the right feedback as mentors.
Also, structuring it as something like:
What worked
What went wrong
What could be improved
This will force the mentee to share the right feedback.
David Montenegro
I agree with Glen about anonymity; it's crucial, imho.
I'd share my 2c about ratings.
Having specific traits to rate might be more accurate, but it might also backfire (I was, and still am, there).
I'm taking a course with tutors and mentors, and they seem more concerned about meeting their three rating criteria than about actually providing specific support. Of course there are some relevant differences, first of all their concern with sticking to a contract they are paid to fulfill.
However, behavioral economics studies demonstrate that maintaining a "contract" with fewer constraints, when the relationship is grounded in goodwill, returns better results, because it is seen less as an economic transaction (even if here there is no money involved).
I see that more specific data would give more tools to measure, but I think that in this circumstance the feedback should be exclusively qualitative; any attempt at quantifying something so fuzzy without a standard might lead to some kind of aberration in the measurements.
I don't need that specific information; the verbalized experience of the mentees is all I'm interested in knowing.
Cheers
David
p.s.
The feedback topic is huge and complex; I'd be happy to contribute however I can.
Felix Lee
under review