
How to deal with fear of taking dependencies



The team I'm in creates components that can be used by the company's partners to integrate with our platform.



As such, I agree we should take extreme care when introducing (third-party) dependencies. Currently we have no third-party dependencies and we have to stay on the lowest API level of the framework.



Some examples:



  • We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.

  • We have implemented our own components for (de)serializing JSON and are in the process of doing the same for JWT. This is available on a higher level of the framework API.

  • We have implemented a wrapper around the HTTP framework of the standard library, because we don't want to take a dependency on the HTTP implementation of the standard library.

  • All of the code for mapping to/from XML is written "by hand", again for the same reason.

I feel we are taking this too far, and I'm wondering how to deal with it, since I think it greatly impacts our velocity.





























  • 20





    Is there a justification for this (e.g., external requirement) or is it being done out of ignorance?

    – Blrfl
    Apr 8 at 11:22






  • 6





    Do an experiment with some small part of the codebase: create an isolation layer that doesn't try to be a generic library, but defines an abstract interface that models your needs; then put both your own implementation and a 3rd-party dependency behind it, and compare how the two versions work/perform. Weigh the pros and cons, assess how easy (or how hard) it would be to swap implementations, then make a decision. In short, test things out in a relatively low-risk way, see what happens, then decide.

    – Filip Milovanović
    Apr 8 at 11:57







  • 73





    "Currently we have no third-party dependencies" - this always makes me laugh when people claim it. Of course you do. You've not written your own compiler, IDE, or implementation of any standard libraries. You've not written any of the shared object libs that you use indirectly (or directly). When you realise how much 3rd-party software you depend on, you can drop the "dependencies are bad" idea and just enjoy not re-inventing the wheel. I would flag the dependencies that you do have, and then ask why they're acceptable but JSON parsing isn't.

    – UKMonkey
    Apr 8 at 13:00







  • 8





    That said, there are alternative drawbacks, like never finishing projects. But it does promote software jobs and employment :)

    – marshal craft
    Apr 8 at 17:13






  • 5





    You are right that you're wasting effort by re-inventing commodity software. You are wrong in that this is nowhere even close to "avoiding all dependencies". The Excel team at Microsoft once wrote their own C compiler to avoid taking a dependency on the C team at Microsoft. You are taking enormous dependencies on operating systems, high-level frameworks, and so on.

    – Eric Lippert
    Apr 9 at 16:49
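The isolation-layer experiment suggested in the comments can be sketched in a few lines. This is a minimal illustration, not a production design; the `IJsonSerializer` name and both implementations are hypothetical, with the third-party adapter built on Json.NET's `JsonConvert` API:

```csharp
// Hypothetical seam: application code depends on this interface,
// never on a particular serializer.
public interface IJsonSerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string json);
}

// Implementation A: the team's hand-rolled serializer (stubbed here).
public sealed class InHouseJsonSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) => throw new System.NotImplementedException();
    public T Deserialize<T>(string json) => throw new System.NotImplementedException();
}

// Implementation B: a thin adapter over the third-party Json.NET library.
public sealed class JsonNetSerializer : IJsonSerializer
{
    public string Serialize<T>(T value) =>
        Newtonsoft.Json.JsonConvert.SerializeObject(value);

    public T Deserialize<T>(string json) =>
        Newtonsoft.Json.JsonConvert.DeserializeObject<T>(json);
}
```

With both behind the same interface, the two versions can be benchmarked and tested side by side, and swapping one for the other later is a one-line change at the composition root.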


















architecture .net dependencies third-party-libraries code-ownership

edited Apr 9 at 13:43

asked Apr 8 at 11:01

Bertus
6 Answers

































... We are forced to stay on the lowest API level of the framework (.NET Standard) …




This, to me, highlights that not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach.



.NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.



Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.



Then there is .NET Standard 2.1, likely to be released in the autumn of 2019. It will be supported by .NET Core, Mono and Xamarin, but not by any version of the .NET Framework, at least for the foreseeable future and quite likely never. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and offer APIs that the framework does not support.



So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.



Then there's the issue of third-party libraries. JSON.NET, for example, is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed - API-wise - to work with all .NET implementations that support that version of .NET Standard. So you achieve no additional compatibility by avoiding it and creating your own JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.



So yes, you definitely are taking this too far in my view.
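The compatibility guarantee described above shows up directly in the project file. As a minimal sketch (the package version is illustrative), a library targeting .NET Standard 2.0 can reference a .NET Standard-compatible package such as Json.NET and remain consumable from .NET Framework, .NET Core, Mono, and Xamarin alike:

```xml
<!-- Illustrative .csproj: targeting netstandard2.0 keeps the library
     usable from every runtime that implements .NET Standard 2.0, and
     any netstandard2.0-compatible package works on all of them too. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
  </ItemGroup>
</Project>
```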


























  • 16





    "You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tiny eval wrapper with some sanity checks that are easily bypassed?

    – John Dvorak
    Apr 9 at 15:19






  • 4





    "The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there are at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about WinPhone or Silverlight any more, not even Microsoft). Using .NET Standard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story, but there's no indication that they'd do so.

    – Voo
    Apr 9 at 15:43











  • Aside from future platforms probably supporting more rather than less, developing for all the things that might happen is incredibly expensive (and you're likely to miss something anyway). Instead, developing without reinventing the wheel will save more time than adapting to some new situation when that is needed is going to cost.

    – Jasper
    Apr 15 at 0:10

































We are forced to stay on the lowest API level of the framework (.net standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.




The reasoning here is rather backwards. Older, lower API levels are more likely to become obsolete and unsupported than newer ones. While I agree that staying a comfortable way behind the "cutting edge" is sensible to ensure a reasonable level of compatibility in the scenario you mention, never moving forward is beyond extreme.




We have implemented our own components for (de)serializing JSON, and are in the process of doing the same for JWT. This is available in a higher level of the framework API.
We have implemented a wrapper around the HTTP framework of the standard library because we don't want to take a dependency on the HTTP implementation of the standard library.
All of the code for mapping to/from XML is written "by hand", again for the same reason.




This is madness. Even if you don't want to use standard library functions for whatever reason, open source libraries exist with commercially compatible licenses that do all of the above. They've already been written, extensively tested from a functionality, security and API design point of view, and used extensively in many other projects.



If the worst happens and that project goes away, or stops being maintained, then you've got the code to build the library anyway, and you assign someone to maintain it. And you're likely still in a much better position than if you'd rolled your own, since in reality you'll have more tested, cleaner, more maintainable code to look after.



In the much more likely scenario that the project is maintained, and bugs or exploits are found in those libraries, you'll know about them so can do something about it - such as upgrading to a newer version free of charge, or patching your version with the fix if you've taken a copy.
























  • 3





    And even if you can't, switching to another library is still easier and better than rolling your own.

    – Lightness Races in Orbit
    Apr 9 at 23:45






  • 5





    Excellent point that lower level stuff dies faster. That's the whole point of establishing abstractions.

    – Lightness Races in Orbit
    Apr 9 at 23:45











  • "Older, lower API levels are more likely to become obsolete and unsupported than newer ones". Huh? The .NET Standards are built on top of each other as far as I know (meaning 2.0 is 1.3 + X). Also, the standards are simply that: standards, not implementations. It makes no sense to talk about a standard becoming unsupported; at most, specific implementations of that standard might be in the future (but see the earlier point for why that's also not a concern). If your library doesn't need anything outside of .NET Standard 1.3, there's absolutely no reason to change it to 2.0.

    – Voo
    Apr 10 at 10:11

































On the whole, these constraints are good for your customers. Even a popular open source library might be impossible for them to use for some reason.



For example, they may have signed a contract with their customers promising not to use open source products.



However, as you point out, these features are not without cost.



  • Time to market

  • Size of package

  • Performance

I would raise these downsides and talk with customers to find out if they really need the uber levels of compatibility you are offering.



If all your customers already use Json.NET, for example, then using it in your product rather than your own deserialisation code reduces your product's size and improves it.



If you introduce a second version of your product that uses third-party libraries, alongside the 'compatible' one, you could judge the uptake of both. Will customers use the third-party version to get the latest features a bit earlier, or stick with the 'compatible' version?


























  • 11





    Yes I obviously agree, and I would add "security" to your list. There's some potential that you might introduce a vulnerability in your code, especially with things like JSON/JWT, compared to well tested frameworks and definitely the standard library.

    – Bertus
    Apr 8 at 11:28











  • Yes, it's hard to make the list because obviously things like security and performance could go both ways. But there is an obvious conflict of interest between finishing features and ensuring internal components are fully featured/understood

    – Ewan
    Apr 8 at 11:38






  • 12





    "they may have signed a contract with their customers promising not to use open source products" - they're using .NET Standard, which is open source. It's a bad idea to sign that contract when you're basing your entire product on an open source framework.

    – Stephen
    Apr 9 at 1:02











  • And still people do it

    – Ewan
    Apr 9 at 1:02
































The short answer is that you should start introducing third-party dependencies. During your next stand-up meeting, tell everyone that the next week at work will be the most fun they have had in years: they'll replace the JSON and XML components with open source, standard library solutions. Tell everyone that they have three days to replace the JSON component. Celebrate after it's done. Have a party. This is worth celebrating.
























  • 2





    This may be tongue in cheek but it's not unrealistic. I joined a company where a "senior" dev (senior by education only) had tasked a junior dev with writing a state machine library. It had five developer-months in it and it was still buggy, so I ripped it out and replaced it with a turnkey solution in a matter of a couple days.

    – TKK
    Apr 9 at 22:11
































Basically it all comes down to effort vs. risk.



By adding a dependency, updating your framework, or using a higher-level API, you lower your effort but take on risk. So I would suggest doing a SWOT analysis.



  • Strengths: Less effort, because you don't have to code it yourself.

  • Weaknesses: It's not as custom designed for your special needs as a handcrafted solution.

  • Opportunities: Shorter time to market. You might profit from external developments.

  • Threats: You might upset customers with additional dependencies.

As you can see, the additional effort to develop a handcrafted solution is an investment in lowering your threats. Now you can make a strategic decision.


















































    Split your component libraries into a "Core" set, that have no dependencies (essentially what you are doing now) and a "Common" set, that have dependencies on your "Core" and 3rd party libraries.



    That way if someone only wants "Core" functionality, they can have it.



    If someone wants "Common" functionality, they can have it.



    And you can manage what is "Core" versus "Common". You can add functionality more quickly to "Common", and move it into "Core" if/when it makes sense to provide your own implementation.
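The split described above can be sketched as two project files (the project and package names are illustrative). "Core" stands alone with no package references; "Common" layers third-party packages on top of it:

```xml
<!-- Illustrative "Common" project: depends on the dependency-free
     "Core" project plus third-party packages. Consumers who want only
     "Core" reference that project alone and pull in no third-party code. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\MyCompany.Core\MyCompany.Core.csproj" />
    <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
  </ItemGroup>
</Project>
```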



























      protected by gnat Apr 10 at 20:10

















      6 Answers
      6






      active

      oldest

      votes








      6 Answers
      6






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      93















      ... We are forced to stay on the lowest API level of the framework (.NET Standard) …




      This to me highlights the fact that, not only are you potentially restricting yourselves too much, you may also be heading for a nasty fall with your approach.



      .NET Standard is not, and never will be "the lowest API level of the framework". The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight.



      Depending on which version of .NET Standard you are targeting, you can end up with a very rich set of APIs that are compatible with .NET Framework, .NET Core, Mono, and Xamarin. And there are many third-party libraries that are .NET Standard compatible that will therefore work on all these platforms.



      Then there is .NET Standard 2.1, likely to be released in the Autumn of 2019. It will be supported by .NET Core, Mono and Xamarin. It will not be supported by any version of the .NET Framework, at least for the foreseeable future and quite possibly forever. So in the near future, far from being "the lowest API level of the framework", .NET Standard will supersede the framework and have APIs that aren't supported by the latter.



      So be very careful with "The reasoning behind this is that a new platform could one day arrive that only supports that very low API level" as it's quite likely that new platforms will in fact support a higher level API than the old framework does.
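If the worry is supporting both the old .NET Framework and newer platforms, multi-targeting the library is the usual answer rather than freezing on the lowest API level. A sketch (the target framework monikers are the standard ones):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Build once per target: netstandard2.0 keeps .NET Framework consumers
         working, netstandard2.1 unlocks newer APIs on .NET Core, Mono and Xamarin -->
    <TargetFrameworks>netstandard2.0;netstandard2.1</TargetFrameworks>
  </PropertyGroup>
</Project>
```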



      Then there's the issue of third-party libraries. JSON.NET, for example, is compatible with .NET Standard. Any library compatible with .NET Standard is guaranteed, API-wise, to work with all .NET implementations that are compatible with that version of .NET Standard. So you achieve no additional compatibility by avoiding it and creating your own JSON library. You simply create more work for yourselves and incur unnecessary costs for your company.



      So yes, you definitely are taking this too far in my view.


























      • 16





        "You simply create more work for yourselves and incur unnecessary costs for your company." - and security liabilities. Does your JSON encoder crash with a stack overflow if you give it a recursive object? Does your parser handle escaped characters correctly? Does it reject unescaped characters that it should? How about unpaired surrogate characters? Does it overflow when the JSON encodes a number larger than 2^64? Or is it just a tiny eval wrapper with some sanity checks that are easily bypassed?

        – John Dvorak
        Apr 9 at 15:19






      • 4





        "The most restrictive set of APIs for .NET is achieved by creating a portable class library that targets Windows Phone and Silverlight." I'll go out on a limb and claim that there are at least some APIs in that subset that are not supported by all possible implementations that ever existed (and nobody cares about Windows Phone or Silverlight any more, not even Microsoft). Using .NET Standard 2.0 as a target for a modern framework seems very prudent and not particularly limiting. Updating to 2.1 is a different story, but there's no indication that they'd do so.

        – Voo
        Apr 9 at 15:43











      • Aside from future platforms probably supporting more rather than less, developing for all the things that might happen is incredibly expensive (and you're likely to miss something anyway). Developing without reinventing the wheel will save more time than adapting to some new situation, when that is actually needed, will cost.

        – Jasper
        Apr 15 at 0:10















      answered Apr 8 at 11:46 by David Arno (edited Apr 9 at 13:51 by Peter Mortensen)



      51















      We are forced to stay on the lowest API level of the framework (.NET Standard). The reasoning behind this is that a new platform could one day arrive that only supports that very low API level.




      The reasoning here is rather backwards. Older, lower API levels are more likely to become obsolete and unsupported than newer ones. While I agree that staying a comfortable way behind the "cutting edge" is sensible to ensure a reasonable level of compatibility in the scenario you mention, never moving forward is beyond extreme.




      We have implemented our own components for (de)serializing JSON, and are in the process of doing the same for JWT. This is available in a higher level of the framework API.
      We have implemented a wrapper around the HTTP framework of the standard library because we don't want to take a dependency on the HTTP implementation of the standard library.
      All of the code for mapping to/from XML is written "by hand", again for the same reason.




      This is madness. Even if you don't want to use standard library functions for whatever reason, open source libraries exist with commercially compatible licenses that do all of the above. They've already been written, extensively tested from a functionality, security and API design point of view, and used extensively in many other projects.



      If the worst happens and that project goes away, or stops being maintained, then you've got the code to build the library anyway, and you assign someone to maintain it. And you're likely still in a much better position than if you'd rolled your own, since in reality you'll have more tested, cleaner, more maintainable code to look after.



      In the much more likely scenario that the project is maintained, and bugs or exploits are found in those libraries, you'll know about them so can do something about it - such as upgrading to a newer version free of charge, or patching your version with the fix if you've taken a copy.
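One way to keep that control is to pin an exact, audited package version in the project file and bump it deliberately when a fix ships (the package and version below are illustrative). NuGet's bracket syntax pins to exactly that version:

```xml
<ItemGroup>
  <!-- [x.y.z] means exactly this version, never silently upgraded -->
  <PackageReference Include="Newtonsoft.Json" Version="[12.0.1]" />
</ItemGroup>
```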
























      • 3





        And even if you can't, switching to another library is still easier and better than rolling your own.

        – Lightness Races in Orbit
        Apr 9 at 23:45






      • 5





        Excellent point that lower level stuff dies faster. That's the whole point of establishing abstractions.

        – Lightness Races in Orbit
        Apr 9 at 23:45











      • "Older, lower API levels are more likely to become obsolete and unsupported than newer ones". Huh? The .NET Standards are built on top of each other as far as I know (meaning 2.0 is 1.3 + X). Also, the Standards are simply that: standards, not implementations. It makes no sense to talk about a standard becoming unsupported; at most, specific implementations of that standard might be in the future (but see the earlier point why that's also not a concern). If your library doesn't need anything outside of .NET Standard 1.3, there's absolutely no reason to change it to 2.0.

        – Voo
        Apr 10 at 10:11


























      answered Apr 8 at 13:59 by berry120








      11














      On the whole, these policies are good for your customers. Even a popular open source library might be impossible for them to use for some reason.



      For example, they may have signed a contract with their customers promising not to use open source products.



      However, as you point out, these features are not without cost.



      • Time to market

      • Size of package

      • Performance

      I would raise these downsides and talk with customers to find out if they really need the uber levels of compatibility you are offering.



      If all your customers already use Json.NET, for example, then using it in your product rather than your own deserialisation code reduces the product's size and improves it.



      If you introduce a second version of your product, one which uses third-party libraries, alongside the 'compatible' one, you could judge the uptake of both. Will customers use the third-party version to get the latest features a bit earlier, or stick with the 'compatible' version?
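That two-version idea is easiest to pull off if the product depends on a serialization abstraction rather than a concrete serializer, so the library-backed build and the dependency-free build differ only in which implementation gets plugged in. A minimal sketch of that shape (in Python for brevity; all names are illustrative, and the stdlib json module stands in for a third-party library such as Json.NET):

```python
import json
from abc import ABC, abstractmethod


class Serializer(ABC):
    """The only serialization type product code is allowed to see."""

    @abstractmethod
    def dumps(self, obj) -> str: ...

    @abstractmethod
    def loads(self, text: str): ...


class LibrarySerializer(Serializer):
    """'Third-party' build: delegates to an external library (stdlib json here)."""

    def dumps(self, obj) -> str:
        return json.dumps(obj)

    def loads(self, text: str):
        return json.loads(text)


class HandRolledSerializer(Serializer):
    """'Compatible' build: a tiny in-house implementation with zero dependencies.
    Deliberately minimal -- flat, string-valued dicts only."""

    def dumps(self, obj) -> str:
        items = ", ".join(f'"{k}": "{v}"' for k, v in obj.items())
        return "{" + items + "}"

    def loads(self, text: str):
        body = text.strip()[1:-1]
        pairs = [p.split(": ", 1) for p in body.split(", ")] if body else []
        return {k.strip('"'): v.strip('"') for k, v in pairs}


def save_settings(serializer: Serializer, settings: dict) -> str:
    # Product code never names a concrete serializer.
    return serializer.dumps(settings)
```

Shipping the library-backed version then means swapping one constructor call at composition time, and uptake of the two builds can be compared without forking the product code.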


























      • 11





        Yes I obviously agree, and I would add "security" to your list. There's some potential that you might introduce a vulnerability in your code, especially with things like JSON/JWT, compared to well tested frameworks and definitely the standard library.

        – Bertus
        Apr 8 at 11:28











      • Yes, it's hard to make the list because obviously things like security and performance could go both ways. But there is an obvious conflict of interest between finishing features and ensuring internal components are fully featured/understood

        – Ewan
        Apr 8 at 11:38






      • 12





        "they may have signed a contract with their customers promising not to use open source products" - they're using .NET Standard, which is open source. It's a bad idea to sign that contract when you're basing your entire product on an open source framework.

        – Stephen
        Apr 9 at 1:02











      • And still people do it

        – Ewan
        Apr 9 at 1:02























      edited Apr 9 at 15:11









      Peter Mortensen











      answered Apr 8 at 11:09









      Ewan


















      7














      The short answer is that you should start introducing third-party dependencies. During your next stand-up meeting, tell everyone that the next week at work will be the most fun they have had in years -- they'll replace the JSON and XML components with open-source, standard-library solutions. Tell everyone that they have three days to replace the JSON component. Celebrate after it's done. Have a party. This is worth celebrating.
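Concretely, that replacement is usually a diff that mostly deletes code: a hand-rolled parser collapses into a one-line call to a maintained library. A hypothetical before/after, sketched in Python with the stdlib json module standing in for whichever library the team picks (names are illustrative):

```python
import json


# Before: in-house format and parser, handling only the cases
# someone remembered to implement.
def parse_config_legacy(text: str) -> dict:
    result = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result


# After: the whole component becomes a thin shim over the library,
# and edge cases (escaping, nesting, unicode) come for free.
def parse_config(text: str) -> dict:
    return json.loads(text)


def dump_config(config: dict) -> str:
    return json.dumps(config, indent=2)
```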
























      • 2





        This may be tongue in cheek but it's not unrealistic. I joined a company where a "senior" dev (senior by education only) had tasked a junior dev with writing a state machine library. It had five developer-months in it and it was still buggy, so I ripped it out and replaced it with a turnkey solution in a matter of a couple days.

        – TKK
        Apr 9 at 22:11

























      answered Apr 9 at 2:23









      Double Vision Stout Fat Heavy


















      0














      Basically it all comes down to effort vs. risk.



      By adding a dependency, updating your framework, or using a higher-level API, you lower your effort but take on risk. So I would suggest doing a SWOT analysis.



      • Strengths: Less effort, because you don't have to code it yourself.

      • Weaknesses: It's not as custom designed for your special needs as a handcrafted solution.

      • Opportunities: Time to market is shorter. You might profit from external developments.

      • Threats: You might upset customers with additional dependencies.

      As you can see, the additional effort to develop a handcrafted solution is an investment in lowering your threats. Now you can make a strategic decision.











































          answered Apr 8 at 13:52









          Dominic Hofer





















              -2














              Split your component libraries into a "Core" set that has no dependencies (essentially what you are doing now) and a "Common" set that depends on your "Core" and third-party libraries.



              That way if someone only wants "Core" functionality, they can have it.



              If someone wants "Common" functionality, they can have it.



              And you can manage what is "Core" versus "Common". You can add functionality more quickly to "Common", and move it to your own "Core" implementation if/when it makes sense to provide your own implementation.
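One way to sketch that split (illustrative names, Python modules standing in for the .NET projects, stdlib json standing in for a real third-party dependency): "Core" defines the contracts plus a dependency-free implementation, and "Common" layers library-backed implementations on top, importing only Core and the third party.

```python
# --- core module: no third-party imports allowed here ---
from abc import ABC, abstractmethod


class TextCodec(ABC):
    """Contract that both Core and Common implementations satisfy."""

    @abstractmethod
    def encode(self, obj: dict) -> str: ...


class CoreCodec(TextCodec):
    """Dependency-free implementation for customers who only take 'Core'."""

    def encode(self, obj: dict) -> str:
        return ";".join(f"{k}={v}" for k, v in sorted(obj.items()))


# --- common module: may depend on Core and on third-party libraries ---
import json  # stands in for a real third-party dependency


class CommonCodec(TextCodec):
    """Library-backed implementation for customers who accept 'Common'."""

    def encode(self, obj: dict) -> str:
        return json.dumps(obj, sort_keys=True)
```

Functionality can then start life in "Common" (cheap to add) and migrate down into "Core" if enough customers need the dependency-free variant.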











































                  answered Apr 10 at 16:11









                  Turtle1363















                      protected by gnat Apr 10 at 20:10





