Fear not, my few loyal readers (that I can only hope exist) and many passers-by: the towel isn’t about to be thrown in on this blog. Rather, this is a post about when giving up on something you’ve designed is a better plan than soldiering on.
In coding projects there’s generally a tipping point for the more complex ones, and that tipping point sits betwixt two goals: patching the code to fix a problem versus scrapping the code and doing a fundamental rewrite. Writing projects of most varieties often work the same way: when you aren’t getting the result you want, sometimes it’s just better to throw it all out and start over. What “all” means, though, can vary quite dramatically in a gaming context, from an entire campaign right down to a single puzzle. Like most people, I find that scrapping concepts is extremely hard once I’ve begun to invest energy in them. Psychologically, this is known as the sunk cost fallacy: people are willing to throw good money after bad because “not getting a result after investing so much already would mean the investment went to waste!” We can see it around us every day, from stock traders who won’t admit a stock is just going to keep falling and that it would be better to jump ship while they can still recover a fair amount of their investment, even at a loss, all the way to wars where, because lives have already been lost in pursuit of a goal, more must be spent so those losses weren’t a meaningless mistake (note: I’m not referring to any war in particular here; I have no opinion on current ongoing wars, but such things have definitely occurred in the past at the very least). Yes, indeed, this even happens at the gaming table.
The most iconic example of this problem that I’ve encountered earned that status primarily because I was something of a third party, and so I could recognize the sucking black hole of a fallacy from the outside, rather than from inside its event horizon, where it is much more difficult to detect. An acquaintance of mine had designed, for my use, a simple puzzle premised on the characters of my RPG campaign at the time taking water from one basin, depositing it in a second basin, and then carrying water from that basin back to the original, so that the water of each basin was mixed with that of its opposite. The problem was that, beyond one basin containing hot water and the other cold, there was absolutely no clue that this might be the solution except the most tenuous story link, and one relying on information the players were not yet aware of. In other words, it was one of those “guess what the GM is thinking” puzzles. As I ran the puzzle I looked on in dissatisfaction at how hard a time the players were having, and after some minutes wasted on plans that came close by coincidence (mixing water from one basin into the other, but not the other into the first as well), I finally recognized the puzzle for what it was – a bad one – and had to step back, break suspension of disbelief, and tell the players that I had in fact gotten this one wrong and it was a stinker; sorry for wasting your time, let’s just skip right to the good bit where you’ve solved it.
Had the puzzle been of my own devising I’d like to think I would have avoided the mistake in the first place, but no matter what inflated opinion I may hold of myself, eventually we all screw up, even at things we think we’re good at – not unlike rolling a natural 1 and getting a critical failure. The campaign I was running at the time was an open sandbox game centered on exploration, meaning the players could have taken off and gone elsewhere, leaving the bad puzzle for what it was (a waste of time). Even so, I took the time to note my error, apologized to the players that this one kind of sucked, promised I’d do better in the future, and then, by way of making it up to them, gave them the solution and the swag that went with it (a powerful magic felling axe, which became the favored weapon of one of my most engaged and frequent players, along with a bit of cryptic exposition from a magic mouth).
The lesson in all of this? Don’t be afraid to turn a serious critical eye on your own creations, and try to recognize when you have screwed the metaphorical pooch. What’s more, don’t be afraid to simply cut your losses rather than endlessly patching the situation like a leaky boat. Remember that it becomes harder to be critical as things grow – a session is easier to cut short than a campaign – and that things of your own devising exert an equally great clouding effect on your judgment. A lot of the time some creativity and elbow grease can salvage a bad situation and even turn it into a winner, but sometimes it’s better to just throw out your leaky boat and buy a new one instead of constantly buying rolls of duct tape.
Until Next Time,
The Hydra DM
This is a straight-up response post. I had too much to say to submit a comment to this article by The Id DM, and so here are my thoughts.
I feel it’s necessary, first of all, to explain the somewhat unique point of view that makes this worth expanding beyond a simple comment and worth the time spent reading. I’m one of the few GMs who runs games almost exclusively online. I run play-by-post games a bit, but more often they’re combined with what I excel at: running games on virtual tabletops in real time. For a while now my drink of choice has been Map Tool, and the majority of my experience has been with D&D 4th edition. What makes my position unique, however, isn’t my credentials. It’s that a VTT’s random number generating functions – practically speaking, as random as the best gaming dice outside of Vegas – are not even a little bit concerned with the logistics of dice rolling. It’s the work of mere moments to type something to the effect of /roll d20+d18+d74+24+108+1d12r3*d100 to get results from a mechanic so complicated it would’ve been laughed out of FATAL.
(For those curious, that would be a brutal 2 d12 (reroll 1s and 2s) times a d%, added to the static modifiers 24 and 108, plus a d20, d18, and d74 for good measure. Having just rolled it, I got a result of 481.)
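For the curious, the whole expression can be sketched in a few lines of ordinary code. This is not Map Tool’s actual macro parser – just a plain Python approximation of mine, assuming “r3” means “reroll anything under 3” as described above, and that the multiplication binds tighter than the addition:

```python
import random

def brutal_d12(threshold=3):
    """Roll a d12, rerolling any result below the threshold
    (my reading of the 'r3' in the expression above)."""
    while True:
        roll = random.randint(1, 12)
        if roll >= threshold:
            return roll

def monster_roll():
    """One evaluation of d20+d18+d74+24+108+1d12r3*d100."""
    return (random.randint(1, 20)      # d20
            + random.randint(1, 18)    # d18
            + random.randint(1, 74)    # d74
            + 24 + 108                 # static modifiers
            + brutal_d12() * random.randint(1, 100))  # brutal d12 times d%

print(monster_roll())
```

Every run lands somewhere between 138 (all minimums) and 1444 (all maximums) – exactly the kind of spread you’d never want to tally by hand at a real table.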
The fact that the above dice expression can be calculated faster than you can type it, and you can type it in under ten seconds, coupled with the potential for linking to a button to repeat as often as I like, means that dice logistics are absolutely meaningless on a VTT; that’s right, dice logistics on a VTT are like the points on Whose Line. With that in mind, let’s examine the questions Iddy posed to the readers:
What are damage dice even FOR? What do they do for the game?
Damage dice, simply put, are the analog to “to-hit”’s digital. Well, okay, they’re only pretending to be analog, but they do a much more convincing job of it (not least because analog circuits often use ranges to represent discrete values rather than being used as the continuum they are). In some games, like Ars Magica or Spirit of the Century, the “to-hit” expression is a determining factor in damage every single time, not just on a critical hit: if you exceed the DC by more, you do more damage – makes sense, right? Not so in D&D, which is why having a more analog array of damage possibilities is important. Why does this “analog” nature matter in the first place? A few reasons.
- A game that involves very digital quantities, that is “on or off” quantities, is easy to predict and easy to “game”. Dice were purposely included in D&D as an element of luck, or fate, or whatever; I’ll get into that later, but for now all that matters is that it’s the case. A lot of people have said “rather than a minion doing 6 damage on a hit, I’d have it just do 3 damage automatically and be done with it”. Now that’s great for conserving your wrist muscles and saving some time at the table, but it makes the system considerably more game-able, which is something that was supposed to be avoided in the first place. Changing the number of dice allows wider or narrower ranges of predictability to taste.
- A “feel-good” roll, or at least more of one than “to-hit” is, is good for player (and GM) morale. If you roll a 1 on your attack 3 turns in a row that’s some serious sour grapes, but if you roll a 1 on your damage 3 turns in a row while it may suck compared to max damage at least you haven’t missed and wasted your turn. Damage is the “everybody’s a winner” roll, even if some people are bigger winners than others.
- Finally, it should be noted that all of D&D’s original offensive and defensive jargon originated from an attempt to make a fun and fast facsimile of combat in a fantasy environment with too many strange variables to account for. Gygax, being an avid fan of tactical miniatures games, could easily have included more sophisticated rules – and said as much in his time on ENWorld – but felt it was important not to. He felt it sufficient to have a score representing blocking, parrying, dodging, etc. (AC) set against a roll for effectiveness (“to-hit”), and then an abstract representation of physical hardiness, luck, and skill at slipping blows that seemed destined to connect (“HP”) to shield you against the actual damage when an attack is not entirely avoided.
Regarding point #3, the system could’ve just as easily been “did the blow hit you? Okay, you died because you have a sword in your gut”, but that was deemed, I suppose, too abstract. The “effective/not” and “degree of damage” system was selected as a good middle ground. It should also be noted that in Gygax’s eyes an RPG without dice would be more amateur theater than an RPG, so we should also keep that in mind whether we agree or not.
Why are we as DMs spending precious time calculating complicated damage rolls that can take over eight dice plus a static modifier to compute?
In this instance I am proud to say that, in fact, I am not spending my precious time calculating these; the VTT does all the calculating, and it can do this trick in the blink of an eye where it would take me at least a few seconds. Presumably we do it for the reasons above, though for me the doing is much easier than for people using pencil and paper with real dice.
Why not build monsters that deliver specific damage based on whatever attack they use against the player character(s)?
As covered in question 1, this is because it would be not only too gamey, but also treading dangerously close to “not an RPG”, much less “not D&D”, in the eyes of the creator. In terms of monsters, of course, GMs probably appreciate that feel-good roll, too, though likely less than players do since GMs get to roll a lot more attacks than the players do, so being on a cold streak doesn’t hurt so much.
Why is the SOP in D&D one roll for attack and another for damage?
This is where I shall, as referenced above, call on the great creator. Typos and other such things have been preserved for posterity:
AC is the measure of how difficult it is to make an effective attack on a target subject. One might broaden it by including dodging and parrying, but those are subsumed in the single number, as is indicated by the addition of Dex bonus, thus obviating the need for a lot of additional adjustments and dice rolling. The game is not a combat simulation, after all.
Hit points for characters are a combination of actual physical health and the character’s skill in avoiding serious harm from attacks aimed at him that actually hit. This is a further measure of the defender’s increasing ability to slip blows and dodge, as mentioned above in regards AC. While AC increases mainly by the wearing of superior protectionm HPs increase with the character’s accumulating experience in combat reflected by level increase.
In combination the two give a base protection and survivability for the beginning character and allow that base to increase as the character increases in experience. It does not pretend to realism, but it does reflect the effects of increasing skill in a relatively accurate manner while avoiding tedious simulation-oriented considerations and endless dice rolling.
As someone who has designed a number of military miniatures rules sets, I could have made combat in the OAD&D game far more complex, including all manner of considerations for footing, elevation of the opponents, capacity to dodge, parrying skill, opponents using natural weapons, etc. Knowing that the game was not all about combat, I skipped as much of that as I could by having the main factors subsume lessers, ignoring the rest. It is a role-playing exercise where all manner of other game considerations come into play, not just fighting.
Oh, least I forget, when magic is mixed into the formula, getting anything vaguely resembling reality becomes wholly problematical 😉
There you have it, straight from the horse’s mouth: precisely why D&D’s SOP is two rolls: to-hit, then damage. The TL;DR version (if such a thing could exist for a quote of Gary’s) is above under question 1 as reason #3.
How is it helpful to read about unique home rules by WotC staff who were prominent in building, designing and playtesting the game system?
I’ll answer this with another quote from Gary:
That calls to mind the incident that occurred when I was giving a seminar on AD&D to a large audience of dedicated players at a GenCon. Someone asked me howI’d handle a specific situation, and I responded. One fellow in the crowd objected, ‘but that isn’t what the DMG says…’
To that I respnded to this effect: ‘I don’t care what the book says. I wrote it, and I am not infalable. In the case just before us the material in the DMG is wrong–as it is anytime the DM over-rules it.’
The WotC staff are just as fallible as the original, if not more so. We may have learned a lot about RPGs and designing them since Gary’s pioneering journey into uncharted territory all those decades ago, but I’d be willing to bet this piece of wisdom still rings true. The WotC designers doubtless feel that their own rules are wrong in places, but perhaps the majority disagreed at the time, or they agreed but later had second thoughts – for whatever reason, the rules aren’t officially amended. Designers keeping house rules goes back at least to Gygax himself, who held the rushed state of the psionics material, the weapon vs. armor tables, and the weapon speed tables in disdain and stripped the lot from his home game of AD&D – if not from earlier!
So, now that I’ve gone on a bit of a tear, here’s the short answer: they’re helpful because they show us what RPGs are all about, and have been about since their inception.
Didn’t you say something about “it’s not an RPG without dice”? What’s up with that?
Oh, right, sorry. Gygax was indeed noted to say that diceless RPGs were not RPGs (though they were still games where some great fun could be had), and whether I agree with him or not, the point is that luck in the form of rolling dice is integral to D&D, if not necessarily to the RPG genre as a whole. Don’t believe me? Read for yourself:
Diceless and “storytelling” games are not RPGs, but that is not to say that they are not games, nor to claim they lack high entertainment value–fun! My complaint has been that these games hould not claim to be RPGs, nor should those that tour them claim any “adult” or “sophistication” merit becasue they have no random chance.
As for PA’s calling attention to the fact that many an RPG session has little or no random chance element interjected into a play session, this is so. However these RPGs can include that when needed or desired. In a private email I called his attention to this, and the fact that the “diceless” game can not to do, as it is not an RPG, has been emasculated by the excision of random chance
It ain’t an RPG without chance entering into play
And now to add one of my own…
Is Perkins’ idea good or not and why?
The ultimate litmus test of any rule tweak in an RPG is dead simple and two-fold:
- Is this rule as simple as it can be, but no simpler? And,
- Does this rule emulate the intended effect in a way that is at least as fun as what existed previously to emulate that effect?
Going back to the beginning…
Is this a good way to randomize damage? It’s fairly good, yeah. I’d tend towards slightly bigger die sizes than a d6, but having never really tried it I’m not sure if that’s really necessary. It obviously reduces your ability to tweak probability curves, but a similar effect can be achieved by simply varying die sizes. A smaller die size means a lesser variation than a bigger one, and that’s effectively similar to creating probability curves with multiple dice of varying sizes.
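To put a rough number on that claim, here’s a quick simulation (plain Python of my own, dice chosen arbitrarily for illustration) comparing a single d12 against 2d6: similar averages, but the pair of smaller dice clusters much more tightly around the middle.

```python
import random
from statistics import mean, stdev

def sample(roll, n=100_000):
    """Collect n results of a zero-argument roll function."""
    return [roll() for _ in range(n)]

one_d12 = sample(lambda: random.randint(1, 12))
two_d6  = sample(lambda: random.randint(1, 6) + random.randint(1, 6))

# A single die is flat across its range; two smaller dice pile results
# up in the middle, giving a tighter spread around a similar average.
print(f"1d12: mean {mean(one_d12):.2f}, stdev {stdev(one_d12):.2f}")
print(f"2d6:  mean {mean(two_d6):.2f}, stdev {stdev(two_d6):.2f}")
```

The averages come out near 6.5 and 7 respectively, but 2d6 has a noticeably smaller standard deviation – which is the sense in which shrinking the die size stands in for building curves out of multiple dice.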
Is this a good way to maintain a feel-good roll? Sure! Fewer dice slightly reduce its effectiveness, since rolling more dice means a higher chance of avoiding minimum damage, but at the same time the likelihood of max damage is increased as well, so there’s a bit of bad with the good.
Is this a good way to abstract the amount of damage that a character could take or avoid from a hit that is “effective”? Sure. A tighter range of results than what a fistful of dice is capable of producing may even be more realistic – if you’re hit “effectively” shouldn’t it consistently hurt an awful lot rather than swinging all over the shop? Possibly. I’m no expert in combat that involves magic, and I don’t think anyone else is, either. It seems as good as any other way in the abstraction department, with the exception that perhaps you feel it’s “too” abstracted… but as we already know, Gygax had a certain measure of degree that he enjoyed in abstraction, and other people probably have their own. This one is down to personal taste, I think.
Lowering the number of dice obviously reduces the complexity, so I guess the only question is as above: does this reduce complexity too much for your taste?
My answer? It depends. In real life there aren’t any “magic bullet” solutions. I love using ridiculous dice expressions I could never get a grip on at a real table with real little plastic knobs with numbered faces, because the logistics simply wouldn’t be there. For me this solution doesn’t really offer much in the way of being compelling: clicking the “roll the dice” button is just as fast whether it’s “/roll d20+d18+d74+24+108+1d12r3*d100” or “/roll d6+536”, so I have nothing to lose by using these ridiculous fistfuls of dice and everything to gain in terms of precise number curve control. However, in the real world, at a real table, the trade-off is extremely compelling. Saving yourself some time doing math without any average numerical effect on the game? Aside from the psychological wonder of dropping 200 dollars’ worth of dice on the table to represent how potent an attack really is, is anything of value truly lost greater than the value gained in time saved? I would say no. This seems to me to be, at the very least, a fair trade-off that every “real table, real dice” GM should consider.
Adopting it, of course, depends on if you “care what the book says”… 😉
The Hydra DM
Let’s face it, guys; when you’re fighting undead, sooner or later you’re going to want to run away screaming like a little girl. Well, alright, maybe “want” is the wrong word – “must” is perhaps more fitting. For a long time in D&D there hasn’t been anything approaching a robust system of fight or flight – only fight. No more! I present to you now, officially, my Spirit of the Century Chase Hack for D&D 4th edition.
What makes a chase good?
Foot chases, especially, all have some very common themes:
- They don’t last long in the game. A terror-induced sprint can only last for a short while, especially when you’re burdened by adventuring gear, armor, and weapons. Beyond this timer, generally the terrain and actions of the chaser and chasee result in one side catching or losing the other in short order.
- They shouldn’t last long at the table. A chase is fast, and it should therefore be mechanically simple so that you can keep up excitement at the table. There’s not a LOT of strategy here compared to something like a fight, but there is a lot of involvement.
- Speed helps, but it isn’t the MOST important thing. Relying on speed alone might simply make you run into the cart of cabbages the heroes flipped behind them.
- Innovation is key. A good chase scene is driven by actions besides “I run some more”, like parkouring over rooftops, throwing down caltrops, or swinging across a chasm on a rope.
So, what mechanics can we use?
As mentioned, these mechanics are lifted as closely as possible from Spirit of the Century, since they got chase scenes (admittedly for vehicles) pretty much correct from the get-go. Adapting it to the generally-on-foot nature of dungeoneers is surprisingly not very difficult. Here are the rules of the chase scene in handy bullet point format.
- Select a Trailblazer for both the PCs and their enemies (in the case of the PCs, let them choose their own). The Trailblazer of team monster will probably change over the course of the chase, but so far I haven’t had much luck with re-arranging who gets to lead on team PC (once you select a PC Trailblazer for a given chase scene you should probably just keep them unless you know the terrain is going to change drastically). Those of you shooting for irony can ask for a Pathfinder instead, but be warned that players generally react poorly to bad puns.
- The Trailblazer for the team being chased selects a primary skill. A common choice is athletics, but alternatives that also often come up are acrobatics and endurance, or sometimes even stealth. It is possible to switch primary skills between exchanges, but I haven’t seen it come up very often. Remember that skill rolls in a chase scene are predicated on good roleplaying of what the skill roll entails from a character perspective. The roleplaying drives the mechanics of the chase scene and vice versa – neither functions without the other.
- The party of the Trailblazer on the side of the PCs (the monsters do not do this, they’re not heroic enough and since they’re all played by you anyway there’s no need to enforce teamwork) can select one of their members besides the Trailblazer, who will be able to designate another skill as the secondary skill. The Trailblazer cannot be aided by the same person two exchanges in a row. They will, same as the primary skill, give a snippet of roleplaying for why this skill applies to the situation and how their character is using it.
At this point the rules diverge slightly based on who is being chased and who is doing the chasing. If the PCs are being chased –
- The Trailblazer sets a single DC that applies to both skill rolls, then both he and the party member contributing the secondary skill roll make their skill checks. Remember, if they describe great success and the rolls come up as flubs, the opposition was just that much better and your description should match that fact. If either of these checks beats the DC then the PCs experience success, otherwise they experience failure (described below). Remember to apply any miscellaneous modifiers (described below).
- The monster Trailblazer (generally the monster with the highest bonus to the skill still in the chase since most monsters have low bonuses to most skills and will provide little challenge otherwise) attempts to roll against the DC using the primary skill. Remember this isn’t a one way street – you have to give a description, too! Again, below the DC results in some degree of failure, while above the DC results in some degree of success.
If the PCs are the ones chasing –
- The monster Trailblazer sets a DC and selects a primary skill. Again, remember that your skill check requires a description to work. Make a check against this DC, with matching or above being some degree of success, and below being some degree of failure.
- The Trailblazer and the secondary skill contributing member of the PC team will make checks, with the Trailblazer using the primary skill as designated by the monster team and the aiding PC using a secondary skill designated by himself (again, this needs a good description – don’t be afraid to say “that makes no sense”). The usual successes and failures apply.
These are some recommended values and methods for handling successes and failures. For context, I have used these in my West Marches sandbox campaign, which features generally only one or two combat encounters per session (if that), which means the penalties are a bit harsh in order to keep the game challenging. If you want a longer chase scene, or you want to soften the penalties so the PCs don’t need to take an extended rest as soon as it’s over, you can easily do that by simply adjusting the number values.
- If the PCs are being chased and equal or exceed the DC they set with at least one of their two checks they take no immediate penalty and (probably, see getting caught below) continue to flee at top speed. Top speed is assumed to be five times overland speed (speed 6 characters would be running at slightly over 15 miles an hour – something that I think is suitable for heroes at a dead sprint with gear, but you can adjust this to suit your own personal preferences), and a single exchange takes one minute. If you are in a small area and are afraid the chase might go outside of the area, you can reduce the amount of time an exchange takes to a matter of seconds rather than the full game minute, or alternatively you can reduce the movement rate due to it being a confined space.
- If the PCs are being chased and both checks are below the DC, the higher check result is used. All PCs in the group lose a number of healing surges equal to the difference divided by 2 rounded down to a minimum of 1 surge. Adjusting this divisor to be higher can make chases less punishing on PCs, or lower can make them more punishing.
- If the PCs are being chased and the monsters equal or exceed the DC with their one check, PCs each lose healing surges equal to the difference divided by 2 rounded down to a minimum of 1 surge. Again, adjusting this divisor to be higher can make chases less punishing on PCs.
- If the PCs are being chased and the monsters do not equal or exceed the DC, they will take damage on their stress track (see The Stress Track below) equal to the difference divided by 4 rounded down to a minimum of 1. Adjusting this divisor can make monsters easier to catch or escape as you please, and will have much the same effect as changing the divisor on lost healing surges.
- If the PCs are chasing, and the monsters equal or exceed their own DC, the monsters will continue to flee at top speed (probably, see getting caught below) where top speed is determined the same way it is for the PCs.
- If the PCs are chasing, and they equal or exceed the DC with either or both checks, the monsters take damage to their stress track equal to the difference of the larger result and the DC divided by 4 rounded down to a minimum of 1.
- If the PCs are chasing, and both of their checks do not equal or exceed the DC, the PCs will lose healing surges equal to the difference of the larger result and the DC divided by 2 rounded down to a minimum of 1.
- If the PCs are chasing, and the monsters do not equal or exceed their own DC, the monsters will take damage on their stress track equal to the difference between the result and the DC divided by 4 rounded down to a minimum of 1.
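For anyone who likes seeing the arithmetic in one place, here’s the “PCs are being chased” case as a short Python sketch. The function name and parameter names are my own shorthand, not anything official, and the chasing case works the same way with the roles mirrored:

```python
def pcs_chased_exchange(dc, primary, secondary, monster,
                        surge_div=2, stress_div=4):
    """Resolve one exchange where the PCs flee.
    Returns (surges lost by each PC, stress dealt to the monster team)."""
    surges_lost = 0
    monster_stress = 0
    # PC side: success if EITHER check meets the DC; on a double failure
    # the higher (closest) result determines the surge loss.
    best = max(primary, secondary)
    if best < dc:
        surges_lost += max(1, (dc - best) // surge_div)
    # Monster side: success costs the PCs more surges; failure puts
    # damage on the monsters' stress track instead.
    if monster >= dc:
        surges_lost += max(1, (monster - dc) // surge_div)
    else:
        monster_stress = max(1, (dc - monster) // stress_div)
    return surges_lost, monster_stress
```

Raising `surge_div` or `stress_div` softens the chase exactly as the bullets above describe; lowering them makes it more punishing.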
The Stress Track
Monsters, unfortunately, do not REALLY have healing surges. I mean they technically have one per tier, but that’s a pretty lousy amount of surges to use as a progress bar. Therefore I have lifted the concept of the Stress Track directly from Spirit of the Century. Each 4 minions contributes one box, each standard monster contributes one box, each elite monster contributes two boxes, and each solo monster contributes five boxes. Each box is sequentially numbered left to right starting with 1. When a monster team takes damage to a stress box, that number box is filled in. If the box is already filled in, the empty box with the next highest number is filled in. When the “box” above the highest actual box on the track is “filled in” the monster team is defeated (see below). As an example, if they take damage to the 1 box, the leftmost box is filled in. If they take damage to the 1 box again, the leftmost EMPTY box (2 box) is filled in. If they take damage to the five box, but their stress track is only four boxes long, the monsters are considered defeated. Additionally, if all of the boxes are filled in the monsters are considered defeated (since each box correlates to a remaining monster, with no boxes left there should be no monsters left). If you want to be really involved mechanically you can have monsters give fewer boxes if they’re injured, but I’ve never found that to be necessary.
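As a sanity check on the roll-up rule, here’s the stress track as a tiny Python class (class and method names are mine). It reproduces the example above: two hits to box 1 fill boxes 1 and 2, and a hit to box 5 on a four-box track means defeat.

```python
class StressTrack:
    """Monster-team stress track: every 4 minions = 1 box,
    standard = 1, elite = 2, solo = 5."""

    def __init__(self, minions=0, standards=0, elites=0, solos=0):
        n = minions // 4 + standards + elites * 2 + solos * 5
        self.filled = [False] * n   # box i+1 is self.filled[i]

    def take_hit(self, box):
        """Fill the numbered box, rolling up to the next-highest empty
        box if it's taken. Returns True if the team is now defeated."""
        for i in range(box - 1, len(self.filled)):
            if not self.filled[i]:
                self.filled[i] = True
                return self.defeated
        return True                 # overflowed past the last box

    @property
    def defeated(self):
        return all(self.filled)
```

Usage: `StressTrack(standards=4)` gives a four-box track; hitting box 1 twice fills boxes 1 and 2, and `take_hit(5)` on that track comes back `True` (overflow, monsters defeated).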
The PC team is considered “caught” if any of their members takes at least surge value HP damage from losing surges with no healing surges remaining (contrary to normal surge loss, losing surges in a chase will eventually cost you surge value hit points once you have no surges left to lose). Running yourself ragged like this is obviously a measure of desperation, and it is often advisable, if it looks like you will be caught, to simply end the chase as a PC.

You can stop fleeing at any time and begin to fight. Whenever you are caught or choose to stop and fight, you must complete a full round of combat before deciding to attempt to flee again. Aside from this stipulation of a necessary round of combat, the PCs may choose to flee at ANY TIME it is a PC’s turn (and the same goes for monsters respectively). You may flee at zero surges remaining, but if you take surge value or more damage from losing surges with none remaining you are once again caught.

Monsters are considered completely caught when their stress track overflows or is full (as described in the last section). It is recommended that for inconsequential monsters (i.e. not solo monsters or named villains), when their contributing stress box is filled they are overtaken and defeated in a manner you allow the PCs to describe to you, or, if the monsters are chasing rather than being chased, that they fall too far behind or otherwise give up or are taken out. If powerful monsters or named villains are overtaken it is advisable to use personal discretion in figuring out a reasonable impairment to their combat ability, similar to how PCs would need to engage in combat with fewer or no healing surges remaining.
Spending their “big punch” powers such as recharges, encounters, or dailies and having those unavailable, or else having them take automatic damage (like bloodied value as an example) or suffer some other disabling condition like weakened or dazed (until short rest) is what I’d recommend, but really it’s up to you and your best judgment for what would make the best dramatic conclusion to the chase scene in such a case.
It is highly recommended that you include some miscellaneous modifiers in your chase scenes. If the minimum speed of one group exceeds that of the other, the faster group should get a +1 to all checks for each unit of speed they are faster. A speed 7 group entirely made of elves, therefore, against a speed 5 group made of plate armored fighters would give the elves a +2 to all checks. Alternatively, you can match the average speed of the groups together instead of the minimum speeds, although I find minimum speeds to be easier. If you want speed to be more of a factor, such as in open terrain like a field, you can increase the potency of speed, but I wouldn’t put it above +5 per unit no matter what the terrain is like. Speed is important in a chase scene, but not the MOST important. Heroes escape guard dogs all the time, after all, and dogs are pretty fast compared to humans!
Other modifiers include familiarity with the terrain type, and perhaps even plusses or minuses depending on the primary skill selected and how well it suits the terrain. Miscellaneous modifiers are your way to adjust the basic framework presented here to fit whatever situation it’s placed into.
Using the Spirit of the Century modifiers for group size (2-3 = +1, 4-6 = +2, 7-9 = +3, 10-12 = +4, etc.) for the monsters seems to work well, too, since monsters are usually only trained in one or two skills and tend to have lower modifiers than the PCs at that. In the event of solo or elite monsters you could consider giving them modifiers based on number of boxes contributed to the group (so an elite counts as two for purposes of group size, or a solo as five). Whether minions grant bonuses for size based on the individual number of them or based on the amount of standards they are worth/boxes they contribute is up to you, but I generally base it on the amount of standard monsters they should be worth, same as elites/solos.
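Put together, the speed and group-size bonuses work out to a couple of one-liners. A minimal sketch with my own function names; the bracket table is the Spirit of the Century one quoted above:

```python
def speed_bonus(faster_min_speed: int, slower_min_speed: int,
                per_unit: int = 1) -> int:
    """+per_unit to all checks for each unit of speed by which the
    faster group's minimum speed exceeds the slower group's."""
    return max(0, faster_min_speed - slower_min_speed) * per_unit

def group_size_bonus(effective_size: int) -> int:
    """Spirit of the Century scaling: 2-3 = +1, 4-6 = +2, 7-9 = +3,
    10-12 = +4, and so on (+1 per additional three members)."""
    if effective_size <= 1:
        return 0
    if effective_size <= 3:
        return 1
    # Each bracket after 2-3 spans three sizes: 4-6, 7-9, 10-12, ...
    return 2 + (effective_size - 4) // 3

# The elves vs. plate-armored fighters example: speed 7 vs. speed 5.
print(speed_bonus(7, 5))    # +2 for the elves
# Three ghouls chasing as a group of three:
print(group_size_bonus(3))  # +1
```

An elite counting as two monsters, or a solo as five, just feeds into `effective_size` the same way.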
I recommend, further, that the spending of a daily power should allow the person expending it to make a check of a skill related to that power even if it doesn’t make a lot of sense in the chase scene. A good example would be a wizard who wants to use Arcane Whirlwind, his level 1 daily power, to roll an arcana check as a secondary skill check. Consider giving better modifiers, such as +1 per level of the power, for higher level expenditures. I also considered allowing encounter powers for this, but there were simply too many encounter powers versus the length of the chase so I wouldn’t recommend it unless you want your players to go hog-wild with checks that don’t make a lot of sense.
Chases using Mounts or Vehicles
These will function in an identical manner to the above framework, although the skills used will probably be different, or at least the roleplayed descriptions. Athletics can be used to adjust the main mast, acrobatics to avoid falling off the rigging as you climb to the crow’s nest to get a better view, or nature to coax your horse into jumping across a pit.
Example of Play
Eravan the Eladrin Wizard, Rhovan the Human Warlock, and Lilac Sear the Human Blackguard are adventuring together when they’re set upon by a handful (3) of Maydeath Ghouls. Knowing that they are unable to combat such terrible foes without the divine protection of a Cleric or Paladin, they immediately turn and flee the cliffside temple they had been exploring.
Eravan: Oh, hell.
Lilac: Right, we run away. I’ll be Trailblazer.
(The DM doesn’t have much of a choice here since all of the ghouls are the same; he selects one at random to be the Trailblazer for the monsters).
Rhovan: I’ve got a killer Arcana check, I’m going to shoot my Flame of Phlegethos daily power at the Ghouls to help cover our retreat.
Lilac: Alright, well, since this is a caldera I’m just going to say that we run as fast as we can across the overgrown garden and back to the entrance hallway, where we can start climbing back down the ropes we left from the ascent. DC 21.
DM: Go ahead and roll, guys.
Lilac and Rhovan (simultaneously): I got a 23!
DM: Haha, alright, the Ghouls come after you with their supernatural speed hungering for your flesh! (Knowing the Ghouls are speed 6 and Lilac’s armor slows her down to speed 5, the Ghouls get a +1 bonus to their check, and since they are a group size of three they get another +1 bonus). 16! Ah, with those bonuses that’s an 18 against DC 21, so the Ghouls take a hit to the (21-18=3/4=.75, minimum 1) 1 box. (This DM is being transparent about the mechanics; you don’t need to announce which box is struck if you don’t want to). The flames of Phlegethos carve into one of the Ghouls and it’s too busy being on fire to chase any further, but the other two are right behind you! Your minute-long sprint takes you across the garden and back into the entrance tunnel full of graffiti – you can see the door out to the cliffside from here. (If the Trailblazer was the one that was removed from the chase, the DM will need to select a new one).
End of First Exchange, beginning of Second Exchange
Lilac: Hm, well, we’re going to have to climb down the ropes; athletics is generally the skill used for climbing and I’m not good at much else that will be useful so I say that we’ll descend the ropes as fast as we can, hoping that the Ghouls don’t know how to climb. DC 22.
Eravan: Is there any way I could maybe use Perception to find the quickest way down? (Eravan has a very high modifier to perception and wants to try to use it in a reasonable way).
DM: No, the ropes are anchored where you left them – you’ll have to just use them as they are unless you fancy a free-climb down the cliff-face.
Eravan: Well, that is definitely not something I fancy. Hrm. Alright, I’d like to use Nature to try to figure out if Ghouls can climb after us so we’re ready to jam when we hit the bottom if they can?
DM: That seems reasonable to me. Roll ’em guys.
Lilac: 21, drat!
Eravan: 20, not really any better.
DM: Alright, you guys each lose 1 healing surge (22-21=1/2=.5, minimum 1). When you reach the bottom of the cliff, however, it seems as though the Ghouls aren’t interested in pursuing you outside of their lair (they decided to stop chasing, which they can do at any time just like the PCs). You’ve escaped… for now, at least.
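The check-resolution arithmetic running through this example (the shortfall against the DC, divided by 4 for monster stress boxes or by 2 for PC surge loss, minimum 1) can be sketched like so. Rounding down before applying the minimum is my assumption; the divisors are the ones used in the exchanges above:

```python
def boxes_lost(dc: int, monster_check: int) -> int:
    """Stress boxes a failing monster group marks off: the shortfall
    divided by 4, minimum 1 (e.g. the ghouls' 18 vs. DC 21)."""
    shortfall = dc - monster_check
    if shortfall <= 0:
        return 0  # the check succeeded; nothing is lost
    return max(1, shortfall // 4)

def surges_lost(dc: int, pc_check: int) -> int:
    """Healing surges a failing PC loses: the shortfall divided by 2,
    minimum 1 (e.g. Lilac's 21 vs. DC 22: 1/2 = .5, minimum 1)."""
    shortfall = dc - pc_check
    if shortfall <= 0:
        return 0
    return max(1, shortfall // 2)

print(surges_lost(22, 21))  # Lilac: 1
print(surges_lost(22, 20))  # Eravan: 1
```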
I find that the above framework tends to work very well, as do my players. It’s very fast, since it only involves a small handful of rolls; it’s roleplaying-rich, since roleplaying is required to make a check; and unlike a skill challenge there is no contrived “X before Y” designated ending point – the heroes are instead in full control of how much effort they want to devote to running or chasing. Speed is a factor, but not the most important one, with innovation (roleplaying your checks) at the forefront. The chases don’t last too long at the table (generally only about ten minutes in my experience), and they get everyone involved thanks to secondary skills. They let the dumb muscle character get his chance to shine – a fighter only gets three trained skills, but chances are two of them are Athletics and Endurance, which are both perfectly suited to a chase scene and use either his primary ability score or a secondary one. What’s more, the use of secondary skills allows the PCs to not just follow the villain’s footsteps, but to employ their smarts to cut him off at the pass using a different skill. And, not only that, but the seemingly useless monster skills are finally relevant! Hooray! Finally, they don’t penalize characters who are naturally bad at running, like a wizard, since the wizard doesn’t need to be the one rolling Athletics as Trailblazer.
That’s it from me, but I’d love to hear how people use and adapt this framework to their own games when chases occur. It took me about a half dozen uses to home in on the divisor values that worked for me in terms of stress track and surges lost, though, so I’d recommend that the first time you use the system you keep in mind that you may need to step in as the DM and say “alright, these values aren’t correct, I’m just going to narratively end this chase scene and we’ll adjust them after the session to something we like better.”
This post is part of the May of the Dead Blog Carnival. For more great content regarding Halloween in May, head over there ASAP!
May your undead have the PCs flee in terror,
The Hydra DM
So far there have been articles about power options (non-damage riders on powers) for both PCs and MM1 and MV monsters. There have even been articles on monster damage. So if we’ve taken care of status effects, and we’ve taken care of monster damage, what’s left? To-hit and defenses scale pretty much statically all the way to level 30, so what now? PC damage. If power options are important but far less significant than to-hit, defenses, and damage, and to-hit and defenses are static, that leaves damage. We’ve already investigated what makes monster damage tick (or the lack thereof), so that leaves PC damage. How does PC damage scale by level, especially as compared to monster hit points? Read on to find out.
The first, and most obvious, problem with determining PC damage by level is there isn’t really an expected value, or if there is, it’s difficult to find. There are so many variables, as Mr. Ross mentioned in the comments section of my last article, that to tackle this problem you would need untold hours of dedicated analysis work – it would be so time-consuming that no single blogger could possibly do it.
Or could we?
Thankfully, one group of many individuals has, in fact, already done the work of determining what expected player damage looks like! They are members of the Character Optimization board, who, while I don’t generally approve of the concept of “builds”, are pretty well versed compared to the majority of people in how to create a PC that can do things – such as damage. Rather than having to recombine the thousands of magic items, feats, paragon paths, powers, and epic destinies that exist across the dozens of classes, I can instead simply reference their handy-dandy DPR by level threads (in this case I will be using “DPR King Candidates 3.0”).
But wait! “Stop HydraDM, you’re crazy!” you say? “You shouldn’t use optimized characters like that!” you say? Well, actually, this isn’t a problem like you would expect. Why not? Because I am not comparing player damage to anything else; I am comparing player damage to itself as it relates to monster hit points. Comparing player damage to monster damage will come later, and will use “expected” values rather than solely values submitted to a thread designed to be about damage-per-round.
So, before I start on the damage, first I need to determine the average suggested monster hit points for a given level. Using our handy-dandy DMG (and updated tables where appropriate), this comes out to… well, actually there is no standard amount. The problem here is that despite damage, attack, and the defenses all using crisp and easy to use formulae, hit points on monsters are still determined by constitution score. Blegh. Come on now 4th edition, you’re letting me down here. Anyway, a quick browse of the compendium (and my previous data on the matter) shows that we are given 213 standard monsters in MV and, just to round things out, 228 standard monsters in MM3 (I’m using modern monsters because this is an analysis of modern damage – no point in including MM1 stuff here). Therefore, consider total standard monster count to equal 441. This set of monsters is composed of 85 skirmishers (19.3%), 84 soldiers (19.0%), 84 controllers (19.0%), 55 artillery monsters (12.5%), 54 lurkers (12.2%), and 79 brutes (17.9%). Ignore the fact that we’re missing .1% from rounding error here – my actual calculations use the entire number.
Using this information, we acquire some figures: 7.86 (approx) base hp for any given monster, plus 7.86 (approx) hp per level, plus constitution score. So what the heck is the constitution score going to be? “On average, the highest ability score of a [Non-AC Defense’s ability score] pair is equal to 13 + one-half the monster’s level.” So, if Constitution is the higher of our STR-CON pair that means, what, 13 hp at first, 14 at second, 14 at third, 15 at fourth etc.? Sure, why not, let’s go with that. It’s as good a guess as any, and really we only need a yardstick. Being a handful of HP off shouldn’t be too bad, assuming there’s actually a noteworthy problem to be found.
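As a quick yardstick, here’s that guess expressed as a function; the ~7.86 figures are the averages computed above, and the constitution term is the 13-plus-half-level estimate, so treat the output as approximate:

```python
def expected_monster_hp(level: int) -> float:
    """Approximate standard-monster HP: ~7.86 base + ~7.86 per level,
    plus a Constitution score guessed at 13 + half the monster's
    level (rounded down)."""
    constitution = 13 + level // 2
    return 7.86 + 7.86 * level + constitution

print(round(expected_monster_hp(1), 2))   # ~28.72
print(round(expected_monster_hp(30), 2))  # ~271.66
```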
One final assumption: I am going to ignore area damage. There is no way to tell how many monsters might be hit by such attacks, and thus calculating their expected damage value is frankly difficult at best.
The following values are Damage-Per-Round (DPR) that can be continued indefinitely. I will therefore equate this damage with the Monster Single-Target Damage expression. The values posted in the practical DPR Kings thread (done by averaging submissions) are for levels 1, 6, 12, 16, 24, and 30.
Expected % of Expected Monster HP (practical) –
- Level 1: 68.4%
- Level 6: 51.9%
- Level 12: 54.4%
- Level 16: 56.1%
- Level 24: 69.2%
- Level 30: 161.8%
So, from our practical submissions, what seems to be the pattern here? Well, your given DPR Kings submission seems to do quite well at level 1 (I imagine this is because of the plethora of experience concentrated in optimizing level 1 characters), then dips down (again, probably due to less experience), and rises steadily until it reaches level 30, at which point it spikes off the charts, primarily because of a few ridiculous character concepts, such as the one that expects an average DPR of 3,857 (yes, you read that right) currently holding first place for level 30. That particular build is, technically speaking, a statistical outlier (outside of 3 standard deviations from the mean; in this case it is at around 5), but I have included it anyway because I think that despite being an outlier it is also very important. The “smiley-face” nature of these numbers (high at the start, high at the end, kind of low-ish in the middle) seems to suggest that the center of the graph should be higher than it is, except that very few people use those levels as a target goal in comparison to level 1 or level 30. So, while we might be wise to ignore the outlier at level 30, we also might be wise to include it. In this case I have kept it included. A special thanks to commenter “Sentack” for pointing this oversight out to me 🙂
Something else important to note: a “fair” striker is considered one that can sustain an expected DPR/Monster HP of .4, or killing 40% of a standard monster per round, every round, indefinitely. At 20% you are considered “garbage” compared to standards of play they have established, and at 60% you are considered “optimized”. 80%-100% and above is suggested to be particularly impressive and potentially necessitating an adjustment by WotC due to its power relative to the majority of builds. As a cursory example of this in action, a Warlock at level 1 using Eldritch Blast with an ability score modifier of 4, accurate implement for +1 to hit, and attacking a cursed target will deal an average of 8.8 DPR, which translates to 27.5% of an expected monster’s HP at level 1 (roughly) – not quite “fair”.
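For the curious, that Warlock figure falls out of the usual DPR arithmetic (non-crit hit chance times average damage, plus crit chance times maximized damage). The +5 total attack bonus and the defense of 13 are my assumptions (ability modifier 4 plus Accurate implement, against a typical level 1 Reflex), but they reproduce the 8.8:

```python
def dpr(attack_bonus: int, defense: int, avg_damage: float,
        max_damage: float) -> float:
    """Expected damage per round: a natural 20 (5%) always hits and
    deals maximized damage; other hits deal average damage."""
    # Chance that d20 + bonus meets the defense, clamped to 5%-95%.
    to_hit = max(0.05, min(0.95, (21 - (defense - attack_bonus)) / 20))
    crit = 0.05
    return (to_hit - crit) * avg_damage + crit * max_damage

# Eldritch Blast 1d10+4 against a cursed target (+1d6):
# average 5.5 + 4 + 3.5 = 13, maximized 10 + 4 + 6 = 20.
print(round(dpr(5, 13, 13.0, 20.0), 1))  # 8.8
```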
So, yes: as expected, PC damage scaling increases faster as the level gets higher. But what, then, does their damage look like compared to monster damage? Thankfully I happen to already have those numbers, so let’s have a look (in % of opponent hit points):
Level 1:
Monster Single Target (Normal): 34.3%
Monster Single Target (Brute): 42.9%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 68.4%

Level 6:
Monster Single Target (Normal): 27.6%
Monster Single Target (Brute): 34.5%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 51.9%

Level 12:
Monster Single Target (Normal): 24.7%
Monster Single Target (Brute): 30.1%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 54.4%

Level 16:
Monster Single Target (Normal): 23.6%
Monster Single Target (Brute): 29.5%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 56.1%

Level 24:
Monster Single Target (Normal): 21.9%
Monster Single Target (Brute): 27.3%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 69.2%

Level 30:
Monster Single Target (Normal): 21.4%
Monster Single Target (Brute): 26.8%
PC Single Target (“Fair”): 40%
PC Single Target (“Optimized”): 60%
PC Single Target (Practical): 161.8%
In chart format, that looks a little something like this (with high single target brute limited damage tacked on for scale; it says maximum, but it’s the average of that maximum set of dice and modifiers) –
Eugh. That looks kind of gnarly, doesn’t it?
Tune in next time for the capstone article of the series, which will combine expected damage and expected power options into the ultimate graph of monster versus PC (by-the-book) power.
A wonderful post by Casey Ross has investigated one of the issues that my own two-part study glossed over. While I was engaged in monster power options, certain that the power of these effects relative to the PCs would explain the epic tier power gap that so many notable community members continually point to, I actually learned that the opposite seems to be true: all the power word: stuns on earth will not make up for higher defense scores, damage expressions, and to-hit bonuses. Along this vein, the aforementioned study by Casey Ross used a very simple trick: by considering level 1 damage to be ideal (it often feels that way anecdotally) and figuring out how the official damage expressions scaled relative to the appropriate defense (hit points) as the levels increased, he was able to find a big gap forming by even mid heroic tier. Monster attack and defense kept pace with the PCs all the way to the final bell, but damage definitely didn’t. By level 30 the medium damage expression had fallen to ~50% of the predicted scaling level, and the bigger spike damage expressions fared even worse.
This article is an expansion on this idea, but in this case I’m going to work backwards. I’m going to take the average HP of a level 30 PC (with a few assumptions), compare that to the (suggested) damage of level 30 monsters, and express, in level 1 terms, what that damage would look like. This isn’t really a practical exercise, but it will hopefully provide a clearer illustration of the differences. Almost all 4th edition players are familiar with level 1 and the sorts of numbers going on there, but very few are intimately familiar with epic tier in comparison (Mike Shea of Sly Flourish being a notable exception – I noted him snapping up Ross’s article with glee on Twitter, and since he wrote a whole handbook on epic tier I can probably guess why), and hopefully this will show exactly what is happening in epic tier in a way that everybody can relate to.
Without further ado, time to get this underway. The first step was determining PC hit points. The core assumptions here were based on some quick data from the character builder tallying up hit point averages of the four roles, then assuming the party had one of each and averaging those together. The results were an average base hp of 12.75 and an average per-level hp of 5.20. These values are obviously theoretical, as nobody can have partial hit points per level. For our +Constitution score hit points I assumed an average value of 12, as a result of an expected constitution score of 10 for a controller, 12 for a leader, 12 for a striker, and 14 for a defender. Obviously these values are pulled out of thin air, but being one or two hp off in either direction shouldn’t harm our results too badly. With a starting HP of 24.75, then (not too far from the suggestion of 25 by Mr. Ross), and a per-level hp of 5.20, our theoretical PC will reach level 30 (assuming no ability score buffs to constitution, which is a reasonable bet in my opinion) with 177.55 hit points (the additional two are from 11th and 21st level, when all ability scores are automatically increased by 1).

For the purposes of monster damage I will be ignoring hit chance (which is supposed to remain fairly static, and in practice does, or so I’m told) and using only average damage in the first chart. When adjusting for brutes or limited damage (where the books urge an increase of 25% or 50%) I will be increasing the average damage. This does not always pan out in practice compared to other ways to increase the value by 25 or 50 percent, and often the max damage, min damage, or average damage will not scale quite properly depending on which method you use – them’s the breaks, I guess, and since this way is easiest that’s what I’ll be using.
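Here’s that hit point arithmetic spelled out as a quick sanity check, using the assumptions above (12.75 base plus 12 constitution, 5.20 hp per level, and +1 hp at each of levels 11 and 21):

```python
def theoretical_pc_hp(level: int) -> float:
    """Average party-member HP under the stated assumptions:
    12.75 base + 12 Constitution at level 1, 5.20 hp per level
    thereafter, and +1 hp at levels 11 and 21 from the automatic
    ability score increases."""
    hp = 12.75 + 12 + 5.20 * (level - 1)
    hp += sum(1 for bump in (11, 21) if level >= bump)
    return hp

print(round(theoretical_pc_hp(1), 2))   # 24.75
print(round(theoretical_pc_hp(30), 2))  # 177.55
```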
The findings, unsurprisingly, match up well with what Ross had, and the result is the same as if I had used his data (I like to be thorough, so sue me) – level 1 damage, if it were as effective against PCs as level 30 damage is on a numbers-based scale, would on average be reduced ~61%. This is what level 1 damage would look like in that case, converted to dice as best I can (the lowest die assumed available is 1d4). Keep in mind that the book recommends increasing limited damage by 25-50% – I accounted for both possibilities, as well as the possibility of applying this increase to already-increased brute damage, each listed discretely so that if you wanted to make a monster using the “appropriately” downgraded value it would be very easy (though I did not model limited damage on a minion – that comes up rather rarely).
- Minion: 2
- Brute Minion: 3
- Multi-Target Standard: 1d4+2
- Single-Target Standard: 1d4+3
- Brute Multi-Target: 1d4+3
- Brute Single Target: 1d4+4
- Limited Multi-Target (Low): 1d4+3
- Limited Single-Target (Low): 1d6+3
- Limited Multi-Target (High): 1d6+3
- Limited Single-Target (High): 2d6+1
- Limited Brute Multi-Target (Low): 1d6+3
- Limited Brute Single-Target (Low): 2d6+1
- Limited Brute Multi-Target (High): 2d6+1
- Limited Brute Single-Target (High): 2d6+4
The almighty brute monster will use its daily power to deal!… 2d6+4 damage to one creature? Lame. A level 1 fighter with a mordenkrad can melee basic harder than that.
This is the scale of the problem facing level 30 monster damage; hopefully it’s now as clear to you as it is to me. My experience is primarily in the heroic tier, and I’d wager most people are the same way. Imagine monsters rolling 1d4+3 as their normal attack, or a minion that deals 2 damage, against level 1 heroes – that is how monster damage stacks up to hero hit points in epic tier: pathetically.
Combined with my past findings that damage, to-hit, and defenses seem to trump all of the power options by a wide margin, is it any wonder that epic tier creatures get stomped so badly? The heroes don’t only outweigh them in quantity of options (which you can read more about in this article of mine: Exploring Complexity) and in quantity of power options, both of which are interesting and somewhat potent things to have on your side, but also in damage. By a lot. Like a lot a lot. And, as we all know, damage is the only guaranteed win condition for a fight in 4th edition – no matter how many times you stun somebody, they’re just waiting to make that saving throw and then the battle’s back on. You can save vs. stunned; you can’t save vs. dead. And damage, well, damage makes you dead.
As for what to do with this information, that’s a good question. Should level 30 be easier for the heroes, or harder, in terms of the damage the monsters dish out? Should monster damage scale all the way to level 30 the same way it does at level 1, relying on things like immediate and opportunity action options and power options to make fights truly epic (tier)? Would that mean reducing the amount of power options in heroic and paragon tier to create that contrast at an acceptable depth? What do you guys think? Leave a comment below with your thoughts and get this conversation started.
This post is going to be short. It’s going to be hard-hitting. It’s probably only going to be read by a number of people I could count on one or two hands. This post is about proficiency bonuses, and how they’re killing magic weapons.
The leading experts on character optimization (if you think they’re full of it you can tell them so but I warn you they’re rather testy) seem convinced that, in terms of their “Gold: Why haven’t you taken this yet? A defining choice for a build, or even the whole class.” rating system, the Weapon Expertise feats (effectively a +1 to-hit, plus some slight other bonuses) are worth that gold rating. Setting aside that “+1 to hit” should not be a defining choice for a class (*cough get to work Wizards cough cough*), the experts here seem to believe it’s rather important, among the very most important things, in fact. So, what then if I told you there were feats that granted you +2 or even +3 to hit with certain weapons without any other conditional statements? What would those be rated? Gold+? Platinum? (Astral) Diamond? In fact, these feats exist: the weapon proficiency feats.
Now that I have established the grave threat (clearly!) of weapon proficiency feats, why do we care as DMs? Shouldn’t the players be allowed to pick a weapon to specialize in? Well, we need to care because magic weapons only come in two categories: “useful”, and “for sale, 80% off”. But surely a magic weapon would be useful, would it not? Then consider the following: a level 1 player with a +4 ability score bonus to hit wielding a +3 proficiency weapon with a 1d8 damage die, and that same player wielding a +1 magic equivalent he is not proficient in. The theoretical damage per round may be higher for the magic weapon off the bat, but the problem is that damage relies on you feeling like a rockstar 5% of the time and like a baked potato the other 95% of the time (assuming critical hits exist, expected damage per round for a basic attack is going to be 5.28 and 6.8 respectively; assuming critical hits don’t exist, expected damage per round changes to 5.1 versus 4.75). Not only is your damage actually markedly worse for 95% of the attacks you make (well, okay, ~45% since you miss the other 50% of the time and it’s hard to do worse than 0 damage), but you don’t get to apply any on-hit effects that extra 10 or 15 percentile points of the pie you’re missing from the lack of a proficiency bonus. And, as The Id DM has shown in great depth, almost all player character attacks have some kind of rider attached, and a great many of those are on a successful hit.
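If you want to check the no-crit numbers yourself, here’s the arithmetic; the AC of 16 is my assumption (it’s what yields the 60% and 50% hit chances behind the 5.1 and 4.75 figures), and the crit-inclusive numbers above depend on additional crit assumptions I won’t reproduce here:

```python
def basic_attack_dpr(attack_bonus: int, ac: int, avg_damage: float) -> float:
    """No-crit expected damage per round: hit chance times average
    damage, with the d20 clamped to the usual 5%-95% range."""
    hit_chance = max(0.05, min(0.95, (21 - (ac - attack_bonus)) / 20))
    return hit_chance * avg_damage

# Proficient mundane weapon: +4 ability, +3 proficiency, 1d8+4 (avg 8.5).
print(round(basic_attack_dpr(7, 16, 8.5), 2))   # 5.1
# Non-proficient +1 magic weapon: +4 ability, +1 enhancement, 1d8+5 (avg 9.5).
print(round(basic_attack_dpr(5, 16, 9.5), 2))   # 4.75
```

The magic weapon hits 10 percentage points less often, and that gap is exactly where all those on-hit riders go to die.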
This leads to a very, very obvious problem: a player would often rather use a mundane weapon with which they are proficient than a magic weapon with which they are not despite the magic weapon costing an order of magnitude (or multiple orders of magnitude) more.
DMs often want to include magic weapons of a particular variety; for instance, DMs may wish to include a great Dwarven mordenkrad, powerful and mighty! Used to slay giants and other foul beasts! It truly rings of heroism. Just enough heroism to be marked 80% off and put up for auction because nobody is proficient with mordenkrads.
Basically, what I’m saying here is this: it’d be nice if instead of +2 and +3 bonuses, proficiency was +1 and +2. It’s great to be a specialist in a certain kind of weapon, it makes you feel unique, but really there’s no reason for weapon proficiency to be such a comparatively large bonus. Nobody should want to pawn Excalibur because “it makes me hit too infrequently, I’m not proficient”. That’s just lame.
Until Next Time,
The Hydra DM
And now for the conclusion to my two part series exploring the power options of monsters in D&D 4th edition. This article will focus on the contrast between later monster creations (monster vault) and earlier monster creations (MM1) as well as how monsters contrast with players. For those not in the know, you can read the last article I wrote here, and discover the original premise behind power options with The Id DM’s article here. Much like one probably wouldn’t get Space Balls without first having seen Star Wars, you likely won’t understand this article without reading at the very least the originator of power options as a codified concept, and preferably my last piece on the matter.
I’m going to endeavor to keep this article more to the point than the last one, as shorter tends to be sweeter.
Before I start, I’d like to bring up a few changes I’ve made in the power option process. I have added categories for Insubstantial and Invisible (since they were prevalent last time) as well as a new category called “Free Attacks”, which is exactly what it sounds like: attacks that are granted by the power, not unlike commander’s strike or direct the strike do for a warlord. Finally, because Monster Vault was extremely skimpy on epic tier monsters I had to pull those from Monster Manual 3 (which, again, is still considered a “well made” Monster Manual that was created after the math changes. I would’ve used the even more recent Dark Sun Creature Catalog, but it, too, was fairly skimpy on epic tier material, so MM3 it was).
Monster Class Definitions
To begin, I’d like to pick up where I left off last time: monster class associations with certain power options. The classes have a few minor differences since we last saw them: Controller is much the same (it can do anything), Lurker is reduced in its Blinding, Unconscious, and Bonus categories, Artillery remains basically the same (although it has lost its emphasis on Dazed and Dominated), Soldier loses Grabbed and gains Immobilized and Stunned, Skirmisher is just plain Movement, and the Brute is still pretty much just Prone. All in all the monsters are pretty similar to what they were before, with Controller even more all-encompassing, Skirmishers focused much more on movement alone, and Soldiers getting a bit of a buff in terms of their inflictable conditions. In other words, the monster classes are, on the whole, pretty much identical to what they were two years ago.
The Most Glaring Finding that Started it All
What is it? Well, on Twitter a few weeks ago I sent Mike Shea (of Sly Flourish) a message in response to something he put out there: that monster power increases ~linearly while player power increases ~exponentially. As I have since learned, using the Power Option AEDU structure, PC power increases logarithmically rather than exponentially (in large part due to power replacement in paragon tier and up, and because I am counting only the AED part of that structure, without the U, and without paragon paths, epic destinies, feats, or magic item powers). Of course, even if you don’t allow the PCs to take paragon paths and epic destinies, nor to have magic items with powers, it’s still a pretty brutal curve, rapidly accelerating as they approach paragon tier and then continuing to pull up slowly as they head towards level 30. The monster graph, meanwhile, was not so forgiving. The graphs of total power options available to your average PC versus your average monster looked something like this –
The regression lines are 3rd order polynomials, mostly because I liked how they were fairly smoothed out, not for any real statistical reason. As you can see, monsters as of Monster Manual 1 do indeed increase approximately linearly (it’s a bit of an S-curve, actually, but on the whole it’s fairly flat). Meanwhile players shoot up, up and away. While the monsters gain only ~1 power option each by paragon tier, the PCs have gained around five. By level 30 the PCs have gained around 7 while the monsters hover at a mighty 2. Of course, since we all know Monster Vault monsters are much better designed, we should expect to see MV/MM3 monsters do much better, right? Well, about that…
As it turns out… Monster Vault and MM3 didn’t seem to do a lot for our Standard AEDU Power Option structure. In fact it looks like they even gimped epic tier! That certainly doesn’t fit expectations, does it? How could bigger numbers make up that much of the design? So, in an effort to resolve this inconsistency, I had to delve deeper and expand the Standard AEDU Power Option structure to include also Minor and Triggered actions. One of the big complaints, after all, with early MM1 monster designs is the lack of action efficiency, especially on Solo monsters. With new things to do using their minor and out-of-turn actions perhaps our new MV monsters are better designed after all?
As a result, I took to gathering some facts and figures about the amount of minor and triggered actions per monster; lo and behold, the results were as expected! This is to say nothing of traits and auras, of course (and classifying those is, frankly, nearly impossible under the Power Option system, since they rarely fit into a single category neatly like powers do). With that in mind, I created an adjusted graph that displays cumulative power options between MM1 and MV/MM3 monsters using minor actions and triggered actions. The results are, frankly, much more what one would expect –
– but unsurprisingly it's still not really "enough" to make sense of things. Even if we assume auras cancel out both feats and utility powers, and we leave paragon paths and epic destinies out of the analysis entirely – all very hefty assumptions – the cream-of-the-crop MV/MM3 monsters at level 17 still only have as many options as a level 1 PC. Monsters get as complicated as they can possibly be at level 29, but even that is only equivalent to a level 5 PC. I mean, yeah, okay, monsters can be minions, but really?
Speaking of minions, that was actually the next thing I investigated. To wit, MM1 has 38 minion monster types in it, which were not included in this study. Meanwhile, in MV/MM3, where minions were included, there were 44 of them. Unsurprisingly, minions resulted in a net decrease in minor and triggered actions, although triggered actions were within 4 percentage points of the average (minor actions were significantly lower, at roughly 0.1 per monster compared to the 0.4 per monster average). This means that, realistically, the MV graph should probably be adjusted upwards a bit, but not by much (less than 1 power option per monster on average).
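As a sanity check on that adjustment, here's the arithmetic as a quick sketch. The total MV/MM3 monster count is a made-up placeholder (the post doesn't give one), so the shape of the calculation matters more than the exact output.

```python
# Hypothetical back-of-the-envelope check of the minion adjustment above.
total_monsters = 300        # ASSUMPTION: placeholder, not given in the post
minions = 44                # minion entries counted in MV/MM3
overall_avg_minor = 0.4     # minor actions per monster, minions included
minion_avg_minor = 0.1      # minor actions per minion

# What the average would be with minions excluded, as in the MM1 study:
non_minion_avg = (overall_avg_minor * total_monsters
                  - minion_avg_minor * minions) / (total_monsters - minions)

adjustment = non_minion_avg - overall_avg_minor
# With these placeholder figures the upward adjustment is ~0.05 options
# per monster – comfortably "less than 1", as stated above.
```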
Now, all of that is great and all, but where's the REAL comparison graph of power options? Where are the feats and paragon paths and epic destinies and utility powers? That's a good question, and unfortunately one I'm not going to be able to answer, at least not in whole. Anyone out there who wants to join the Power Options train and hook us up with the data on feats, utility powers, and magic items for the Fighter, Rogue, Cleric, and Druid is more than welcome to pick up where I and my predecessor, TheIdDM, left off. What I can do, though, is take a single example paragon path and epic destiny and slap those on. I'll pick two of the most popular: Kulkor Arms Master and Demigod. They might not have a lot to offer a Druid, but they're pretty typical of a strong set of paragon and epic tier options.
To begin, Kulkor Arms Master at level 11 offers its first benefit: any enemy you hit that grants you Combat Advantage subsequently grants EVERYONE Combat Advantage until the start of your next turn. I am going to assume that you will always be able to get combat advantage to trigger this benefit, as by level 11 if you can't have somebody daze the monster or flank it or something, you're doing something wrong. This increases the power options of each and every AED power by 1, because it can now grant combat advantage. The other two always-on benefits don't actually grant any power options, so that's easy on me. Finally, the level 11 power grants a mark (+1 power option) and the level 20 daily power grants prone (+1 power option).
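To make the bookkeeping explicit, here's the tally from the paragraph above as a tiny helper function; the function name and structure are mine for illustration, not anything from the books.

```python
def kulkor_power_option_gain(aed_attack_powers: int) -> int:
    """Extra Power Options gained from Kulkor Arms Master, per the tally above."""
    ca_riders = aed_attack_powers  # each AED attack can now hand out combat advantage
    mark = 1                       # the level 11 power grants a mark
    prone = 1                      # the level 20 daily power grants prone
    return ca_riders + mark + prone

# e.g. a PC with 9 AED attack powers picks up 11 extra options
```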
Demigod, thankfully, doesn’t grant any Power Options at all – just straight numbers buffs. This was hard enough already, whew. So, now let’s look at the resulting graph –
Holy SMOKES! What HAPPENED? Kulkor Arms Master adding Combat Advantage to every At-Will, Encounter, and Daily power is what happened. These are the sorts of options players can take, and regularly do, and it’s no wonder people feel that monsters just aren’t up to snuff.
But, then, why can a squad of monsters beat up on a squad of PCs… ever? Why can’t level 5 PCs, who have access to just as many power options in their inventory as level 30 super monsters, take the heat? Sadly, the answer is pretty obvious: their numbers aren’t big enough. They can’t hit target defenses, and they can’t take the damage the monsters dish out in return.
To change gears, this illustrates just how powerful damage is in 4th edition combat, and therefore how important the role of “striker” is to trivializing encounters. You’ve doubtless heard the stories about mid-paragon parties who dish out 1,200 damage on the first turn and annihilate any boss monster in their way? I didn’t expect to end my analysis here, but I have – the ultimate conclusion of this analysis of power options in PCs and monsters, as far as I can tell, is that the amount and potency of power options are, at the end of the day, completely eclipsed by damage, hit points, and defense scores – and that’s even before you consider that most power options require you to hit the target in the first place. Is it any wonder that magic items for the armor, neck, and weapon/implement slots are so highly vaunted? Is it any wonder that the static +to-hit feats are so popular? You can have two or three times the Power Options at your disposal and still get turned into creamed corn by a squad of monsters at level + 6.
If you follow my thoughts on other venues, you might be familiar with me speaking to the potency of the condition ladder from Star Wars Saga Edition. It, in essence, takes the place of every one of these effects except for Prone. Because the debuffs are triggered by a damage threshold in addition to things like stun weapons or Force powers, even your regular old soldier can inflict status conditions pretty regularly in that system, and at the bottom of the ladder, when you’ve had too many conditions piled on top of you, you wind up unconscious. Even if you have health left, you just fall unconscious for an extended period of time and the encounter is over. In 4th edition, on the other hand, even if a monster is Dazed, Stunned, Weakened, Unconscious, Removed, and Prone, it’s only going to be out of commission until it makes its saving throws – a couple of rounds, tops. The fact of the matter is that Power Options don’t win fights in 4th edition; damage does. And that brings us to some questions I think would be interesting to muse about:
1) Should Power Options contribute to winning the fight via permanent unconsciousness the same way damage does in 4th edition, or should they remain as tasty, supplemental, and totally unnecessary additional conditions?
2) Should monsters be more complex than they are in terms of Power Options or weaker?
3) Do Power Options contribute to the “feel” of Paragon and Epic tiers significantly enough to withstand the sort of Power Option bloat that exists by the end of Heroic tier and onward? Do you think it would be possible to still feel like a Paragon or Epic PC if your attacks didn’t lay on Power Options like thick whipped cream on hot cocoa?
Turns out I promised to deliver part 2 of my analysis of the Power Options available to, and the other statistics of, the monsters in D&D sometime last week, huh?
So… where is it? What gives Hydra DM, I thought you were cool!
Well, it turns out that I spent a good chunk of this week battling a system-crippling string of invasive malware! Yay, fun! While my computer remains operational, it unfortunately experienced some issues when it did things like, say, restart itself for no good reason – issues such as losing all of the Excel spreadsheet data I was working on.
Ergo, the second part of that article may take up to another week’s worth of work to get out there. Sorry everybody!
The Hydra DM